Feb 14 04:09:26 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 14 04:09:26 crc restorecon[4704]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:26 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 
04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 
04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 
04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc 
restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 14 04:09:27 crc restorecon[4704]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 14 04:09:28 crc kubenswrapper[4867]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.686598 4867 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691200 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691230 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691239 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691248 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691255 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691264 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691272 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691278 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691285 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691292 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691301 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691307 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691313 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691321 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691327 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691352 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691359 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691366 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691372 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691378 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691385 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691391 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691401 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691410 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691417 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691424 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691434 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691444 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691451 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691459 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691466 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691473 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691479 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691485 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691491 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691497 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691530 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691539 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691547 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691553 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691559 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691565 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691575 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691585 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691592 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691601 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691609 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691617 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691623 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691629 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691635 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691642 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691648 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691654 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691660 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691666 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691671 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691678 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691685 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691692 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691698 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691704 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691711 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691717 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691723 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691730 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691736 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691746 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691754 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691761 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.691768 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
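
The "unrecognized feature gate" warnings above appear to be OpenShift-specific gate names (ExternalOIDC, GatewayAPI, and so on) being handed to the upstream Kubernetes feature-gate parser, which only knows the upstream gates; the same list is replayed several more times below as the configuration is re-parsed. When scanning a dump like this, deduplicating the names first makes the noise manageable. A minimal sketch in Python, assuming the journal text has been saved to a local file (the file name here is hypothetical, not part of the log):

import re
from collections import Counter

# Matches journal entries like:
#   W0214 04:09:28.691200 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def summarize(path="kubelet-journal.txt"):  # hypothetical file name
    with open(path, encoding="utf-8") as fh:
        counts = Counter(m.group(1) for m in GATE_RE.finditer(fh.read()))
    for gate, n in counts.most_common():
        print(f"{n:3d}  {gate}")

if __name__ == "__main__":
    summarize()
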
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693634 4867 flags.go:64] FLAG: --address="0.0.0.0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693706 4867 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693728 4867 flags.go:64] FLAG: --anonymous-auth="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693739 4867 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693752 4867 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693761 4867 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693772 4867 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693792 4867 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693800 4867 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693807 4867 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693815 4867 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693823 4867 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693830 4867 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693838 4867 flags.go:64] FLAG: --cgroup-root=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693844 4867 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693852 4867 flags.go:64] FLAG: --client-ca-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693859 4867 flags.go:64] FLAG: --cloud-config=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693866 4867 flags.go:64] FLAG: --cloud-provider=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693872 4867 flags.go:64] FLAG: --cluster-dns="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693883 4867 flags.go:64] FLAG: --cluster-domain=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693890 4867 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693898 4867 flags.go:64] FLAG: --config-dir=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693905 4867 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693914 4867 flags.go:64] FLAG: --container-log-max-files="5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693927 4867 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693934 4867 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693941 4867 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693948 4867 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693955 4867 flags.go:64] FLAG: --contention-profiling="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693961 4867 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693968 4867 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693976 4867 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693982 4867 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.693994 4867 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694000 4867 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694006 4867 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694013 4867 flags.go:64] FLAG: --enable-load-reader="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694020 4867 flags.go:64] FLAG: --enable-server="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694027 4867 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694036 4867 flags.go:64] FLAG: --event-burst="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694043 4867 flags.go:64] FLAG: --event-qps="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694052 4867 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694059 4867 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694067 4867 flags.go:64] FLAG: --eviction-hard=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694076 4867 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694083 4867 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694089 4867 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694096 4867 flags.go:64] FLAG: --eviction-soft=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694102 4867 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694109 4867 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694115 4867 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694122 4867 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694128 4867 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694134 4867 flags.go:64] FLAG: --fail-swap-on="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694141 4867 flags.go:64] FLAG: --feature-gates=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694149 4867 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694158 4867 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694165 4867 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694172 4867 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694179 4867 flags.go:64] FLAG: --healthz-port="10248"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694186 4867 flags.go:64] FLAG: --help="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694192 4867 flags.go:64] FLAG: --hostname-override=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694198 4867 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694206 4867 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694212 4867 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694218 4867 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694225 4867 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694232 4867 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694238 4867 flags.go:64] FLAG: --image-service-endpoint=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694244 4867 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694251 4867 flags.go:64] FLAG: --kube-api-burst="100"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694258 4867 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694265 4867 flags.go:64] FLAG: --kube-api-qps="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694271 4867 flags.go:64] FLAG: --kube-reserved=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694277 4867 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694283 4867 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694290 4867 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694296 4867 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694303 4867 flags.go:64] FLAG: --lock-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694310 4867 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694317 4867 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694324 4867 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694337 4867 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694344 4867 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694352 4867 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694358 4867 flags.go:64] FLAG: --logging-format="text"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694365 4867 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694372 4867 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694379 4867 flags.go:64] FLAG: --manifest-url=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694385 4867 flags.go:64] FLAG: --manifest-url-header=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694396 4867 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694404 4867 flags.go:64] FLAG: --max-open-files="1000000"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694412 4867 flags.go:64] FLAG: --max-pods="110"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694419 4867 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694426 4867 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694434 4867 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694442 4867 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694451 4867 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694458 4867 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694466 4867 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694490 4867 flags.go:64] FLAG: --node-status-max-images="50"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694497 4867 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694527 4867 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694534 4867 flags.go:64] FLAG: --pod-cidr=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694540 4867 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694555 4867 flags.go:64] FLAG: --pod-manifest-path=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694563 4867 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694571 4867 flags.go:64] FLAG: --pods-per-core="0"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694579 4867 flags.go:64] FLAG: --port="10250"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694587 4867 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694594 4867 flags.go:64] FLAG: --provider-id=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694602 4867 flags.go:64] FLAG: --qos-reserved=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694610 4867 flags.go:64] FLAG: --read-only-port="10255"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694618 4867 flags.go:64] FLAG: --register-node="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694625 4867 flags.go:64] FLAG: --register-schedulable="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694633 4867 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694649 4867 flags.go:64] FLAG: --registry-burst="10"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694655 4867 flags.go:64] FLAG: --registry-qps="5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694662 4867 flags.go:64] FLAG: --reserved-cpus=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694669 4867 flags.go:64] FLAG: --reserved-memory=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694680 4867 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694687 4867 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694694 4867 flags.go:64] FLAG: --rotate-certificates="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694701 4867 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694707 4867 flags.go:64] FLAG: --runonce="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694713 4867 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694720 4867 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694727 4867 flags.go:64] FLAG: --seccomp-default="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694734 4867 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694740 4867 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694747 4867 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694754 4867 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694763 4867 flags.go:64] FLAG: --storage-driver-password="root"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694769 4867 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694776 4867 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694783 4867 flags.go:64] FLAG: --storage-driver-user="root"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694789 4867 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694797 4867 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694803 4867 flags.go:64] FLAG: --system-cgroups=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694809 4867 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694821 4867 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694828 4867 flags.go:64] FLAG: --tls-cert-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694834 4867 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694843 4867 flags.go:64] FLAG: --tls-min-version=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694849 4867 flags.go:64] FLAG: --tls-private-key-file=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694856 4867 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694864 4867 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694871 4867 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694879 4867 flags.go:64] FLAG: --v="2"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694892 4867 flags.go:64] FLAG: --version="false"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694903 4867 flags.go:64] FLAG: --vmodule=""
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694913 4867 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.694920 4867 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
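
With --v="2" the kubelet logs every effective command-line flag through flags.go as FLAG: --name="value" pairs, which is the authoritative record of what this invocation actually received; note --config="/etc/kubernetes/kubelet.conf", so most runtime behavior is defined by that file rather than by these flags. A small sketch for collecting the pairs into a dict for auditing, again assuming the journal has been saved to a text file:

import re

# Matches entries like:
#   I0214 04:09:28.693634 4867 flags.go:64] FLAG: --address="0.0.0.0"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w.-]+)="(.*?)"')

def parse_flags(journal_text: str) -> dict:
    """Map each logged kubelet flag name to its (string) value."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(journal_text)}

# e.g. parse_flags(open("kubelet-journal.txt").read())["--node-ip"] == "192.168.126.11"
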
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695150 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695157 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695162 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695169 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695175 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695181 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695187 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695192 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695200 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695206 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695213 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695219 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695224 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695232 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695239 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695247 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695253 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695258 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695263 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695269 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695274 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695280 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695285 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695290 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695295 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695304 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695310 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695316 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695321 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695327 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695332 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695337 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695354 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695359 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695365 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695370 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695376 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695381 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695386 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695392 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695397 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695412 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695417 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695422 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695428 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695433 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695439 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695444 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695450 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695455 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695461 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695466 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695472 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695477 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695482 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695489 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695494 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695500 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695524 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695529 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695535 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695540 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695545 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695552 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695561 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695566 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695574 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695582 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695589 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695596 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.695603 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.695626 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
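
Each feature_gate.go:386 line above and below prints the resolved gate set in Go's fmt rendering of a map value. If you need it programmatically, that rendering converts back into a dict easily; a sketch, making the (safe here) assumption that gate names contain no colons or spaces:

import re

MAP_RE = re.compile(r"feature gates: \{map\[(.*?)\]\}")

def parse_gate_map(entry: str) -> dict:
    """Convert Go's 'feature gates: {map[Name:true ...]}' rendering to a dict of bools."""
    body = MAP_RE.search(entry).group(1)
    return {name: val == "true" for name, val in (pair.split(":") for pair in body.split())}

For the entry above this yields True for CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, and ValidatingAdmissionPolicy, and False for the rest.
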
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.719386 4867 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.719429 4867 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719538 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719550 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719557 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719563 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719570 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719575 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719583 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719591 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719597 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719603 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719609 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719615 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719620 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719625 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719630 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719635 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719640 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719645 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719650 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719655 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719660 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719665 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719670 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719696 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719701 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719705 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719710 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719715 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719720 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719725 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719730 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719735 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719740 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719747 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719753 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719759 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719765 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719772 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719777 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719783 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719797 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719802 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719807 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719846 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719854 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719860 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719865 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719870 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719875 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719881 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719886 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719892 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719898 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719904 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719909 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719914 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719919 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719924 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719929 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719946 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719951 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719956 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719961 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719966 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719970 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719977 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719982 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719987 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719992 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.719998 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720005 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.720014 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720269 4867 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720277 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720283 4867 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720290 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720295 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720300 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720305 4867 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720310 4867 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720315 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720320 4867 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720325 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720329 4867 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720334 4867 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720340 4867 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720345 4867 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720349 4867 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720354 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720359 4867 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720364 4867 feature_gate.go:330] unrecognized feature gate: Example
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720369 4867 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720374 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720378 4867 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720383 4867 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720397 4867 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720402 4867 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720409 4867 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720414 4867 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720421 4867 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720427 4867 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720432 4867 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720437 4867 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720442 4867 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720448 4867 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720454 4867 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720460 4867 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720466 4867 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720472 4867 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720478 4867 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720484 4867 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720490 4867 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720497 4867 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720522 4867 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720528 4867 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720534 4867 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720539 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720544 4867 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720549 4867 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720554 4867 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720559 4867 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720564 4867 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720569 4867 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720574 4867 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720578 4867 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720583 4867 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720588 4867 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720594 4867 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720600 4867 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720608 4867 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720614 4867 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720628 4867 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720634 4867 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720639 4867 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720643 4867 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720648 4867 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720653 4867 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720658 4867 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720663 4867 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720667 4867 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720672 4867 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720677 4867 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.720682 4867 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.720689 4867 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.721940 4867 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.745186 4867 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.745352 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
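
Since the existing kubeconfig is still valid, the kubelet reuses the client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem instead of re-bootstrapping. To inspect that certificate yourself, a sketch using the third-party cryptography package (assuming, as is typical for this file, that the CERTIFICATE block precedes the private key in the combined PEM):

from cryptography import x509

# kubelet-client-current.pem holds the cert and key together; the x509
# loader below parses the first certificate block in the file.
with open("/var/lib/kubelet/pki/kubelet-client-current.pem", "rb") as fh:
    cert = x509.load_pem_x509_certificate(fh.read())

print(cert.subject.rfc4514_string())
print("expires:", cert.not_valid_after)
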
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.747725 4867 server.go:997] "Starting client certificate rotation"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.747778 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.749836 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 09:20:58.623093149 +0000 UTC
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.749958 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.818179 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.822413 4867 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.825319 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.848247 4867 log.go:25] "Validated CRI v1 runtime API"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.886727 4867 log.go:25] "Validated CRI v1 image API"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.889346 4867 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.896324 4867 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-14-04-04-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.896365 4867 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.910809 4867 manager.go:217] Machine: {Timestamp:2026-02-14 04:09:28.909132867 +0000 UTC m=+0.990070201 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1382a0d3-8d29-4f25-bc2c-dc46ad541396 BootID:148e1364-0af4-4e1f-ae72-52166d888ddc Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:89:bf:48 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:89:bf:48 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f1:70:ad Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:61:fa:b2 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:a1:f2:e6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c2:9c:e1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:36:69:22:d8:01:3e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:62:0b:66:1b:d9:61 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911360 4867 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911555 4867 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.911832 4867 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912052 4867 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912087 4867 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912277 4867 topology_manager.go:138] "Creating topology manager with none policy"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912287 4867 container_manager_linux.go:303] "Creating device plugin manager"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.912734 4867 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.915431 4867 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.916460 4867 state_mem.go:36] "Initialized new in-memory state store"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.916858 4867 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920202 4867 kubelet.go:418] "Attempting to sync node with API server"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920226 4867 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920242 4867 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920254 4867 kubelet.go:324] "Adding apiserver pod source"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.920267 4867 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.924055 4867 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.926652 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.926733 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.926887 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.926898 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.927078 4867 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
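
The nodeConfig dump above carries the node's eviction policy: SystemReserved plus five HardEvictionThresholds, each expressed either as an absolute quantity (memory.available < 100Mi) or as a percentage of capacity (nodefs.available < 10%). A toy Go model of how such a threshold is evaluated (this is not the kubelet's actual eviction manager; Threshold and exceeded are invented names):

```go
package main

import "fmt"

// Threshold mirrors one HardEvictionThresholds entry from the NodeConfig
// above: a signal compared against either an absolute quantity in bytes
// or a fraction of total capacity.
type Threshold struct {
	Signal     string
	Quantity   int64   // bytes; 0 means "use Percentage instead"
	Percentage float64 // fraction of capacity, e.g. 0.10
}

// exceeded reports whether the available amount has fallen below the
// threshold, which is when a hard eviction would trigger.
func exceeded(t Threshold, available, capacity int64) bool {
	limit := t.Quantity
	if limit == 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memAvail := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}

	fmt.Println(exceeded(memAvail, 90<<20, 32<<30)) // true: 90Mi is under the 100Mi floor
	fmt.Println(exceeded(nodefs, 10<<30, 85<<30))   // false: ~11.8% of the disk is still free
}
```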
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.929768 4867 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931480 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931546 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931562 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931575 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931598 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931613 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931628 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931651 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931668 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931684 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931702 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.931716 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.932641 4867 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933142 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933359 4867 server.go:1280] "Started kubelet"
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933644 4867 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.933686 4867 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.934653 4867 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 14 04:09:28 crc systemd[1]: Started Kubernetes Kubelet.
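
From here on, every "connection refused" against api-int.crc.testing:6443 (the CSR post, the reflector lists, the CSINode wait, and the lease retry below at interval="200ms") tells the same story: the kubelet is up before the kube-apiserver static pod it is about to launch, and each client simply keeps retrying. A hedged sketch of that retry-with-backoff shape (illustrative only; the real components use client-go's wait/backoff utilities, and waitForAPIServer is a made-up helper):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer keeps dialing the API endpoint until it answers,
// doubling the delay up to a cap. It loops forever on an unreachable
// endpoint, which mirrors how the kubelet's clients behave until the
// apiserver static pod comes up.
func waitForAPIServer(addr string, maxBackoff time.Duration) {
	backoff := 200 * time.Millisecond // echoes the interval="200ms" lease retry in the log
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable:", addr)
			return
		}
		fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

func main() {
	waitForAPIServer("api-int.crc.testing:6443", 7*time.Second)
}
```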
Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.936005 4867 server.go:460] "Adding debug handlers to kubelet server" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938054 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938099 4867 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938116 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:13:30.519114575 +0000 UTC Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938318 4867 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938339 4867 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.938376 4867 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.938527 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.938968 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.939259 4867 factory.go:55] Registering systemd factory Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.939282 4867 factory.go:221] Registration of the systemd container factory successfully Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.939466 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.940128 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940815 4867 factory.go:153] Registering CRI-O factory Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940839 4867 factory.go:221] Registration of the crio container factory successfully Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940910 4867 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940934 4867 factory.go:103] Registering Raw factory Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.940954 4867 manager.go:1196] Started watching for new ooms in manager Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.939580 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 
38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18940178218205da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,LastTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.943107 4867 manager.go:319] Starting recovery of all containers Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947052 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947125 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947142 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947166 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947183 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947208 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947224 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947236 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947258 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947290 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947307 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947327 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947343 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947364 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947379 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947401 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947414 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947428 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947449 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947463 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947481 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947498 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947532 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947553 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947564 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947585 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947599 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947622 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947637 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947659 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947675 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947698 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947733 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947754 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947780 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947798 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947820 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947837 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947947 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.947968 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.949857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950459 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950494 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950530 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950545 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950560 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950575 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950591 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950605 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950621 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950647 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950666 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950685 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950703 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950719 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950734 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950749 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950763 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950792 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950807 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.950823 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954652 4867 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954727 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954748 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954776 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954800 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954815 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954830 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954843 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954857 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954871 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954885 4867 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954900 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954912 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954925 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954968 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954982 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.954997 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955014 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955031 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955047 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955069 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955086 4867 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955102 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955119 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955140 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955184 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955197 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955210 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955221 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955250 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955263 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955274 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955320 4867 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955333 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955347 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955359 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955372 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955386 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955399 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955411 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955428 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955474 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.955493 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956667 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956732 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956748 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956764 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956779 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956793 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956807 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956822 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956836 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956849 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956863 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956880 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956895 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956938 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956951 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956963 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956975 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.956989 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957001 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957013 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957026 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957038 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957050 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957070 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957082 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957097 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957116 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957136 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957153 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957170 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957182 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957248 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957269 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957286 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957299 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957312 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957329 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957342 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957353 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957367 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957378 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957391 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957404 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957415 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957427 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957439 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957453 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957464 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957477 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957490 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957525 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957541 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957554 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957572 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957590 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957611 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957627 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957642 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957658 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957672 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957688 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957704 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957718 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957731 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957745 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957762 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957774 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957789 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957803 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957816 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957829 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957842 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957856 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957870 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957885 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957898 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957911 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957926 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957938 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957951 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.957964 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958024 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958037 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958054 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958068 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958091 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958108 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958122 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958136 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958152 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958168 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958182 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958196 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958214 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958230 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958242 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958256 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958272 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958287 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958299 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958312 4867 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958323 4867 reconstruct.go:97] "Volume reconstruction finished" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.958333 4867 reconciler.go:26] "Reconciler: start to sync state" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.965183 4867 manager.go:324] Recovery completed Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.974922 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977908 4867 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977926 4867 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.977946 4867 state_mem.go:36] "Initialized new in-memory state store" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.993403 4867 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995844 4867 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995924 4867 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.995979 4867 kubelet.go:2335] "Starting kubelet main sync loop" Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.996050 4867 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.997582 4867 policy_none.go:49] "None policy: Start" Feb 14 04:09:28 crc kubenswrapper[4867]: W0214 04:09:28.997972 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:28 crc kubenswrapper[4867]: E0214 04:09:28.998050 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.998690 4867 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 14 04:09:28 crc kubenswrapper[4867]: I0214 04:09:28.998716 4867 state_mem.go:35] "Initializing new in-memory state store" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.039007 4867 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055783 4867 manager.go:334] "Starting Device Plugin manager" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055942 4867 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.055959 4867 server.go:79] "Starting device plugin registration server" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.056591 4867 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.056614 4867 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057093 4867 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057182 4867 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.057195 4867 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.067957 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.097299 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 04:09:29 crc kubenswrapper[4867]: 
I0214 04:09:29.097406 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099392 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.099815 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100726 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100891 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.100890 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101025 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.101965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102029 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.102268 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.103532 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.104968 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.105019 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106733 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.106783 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.108412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.109137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.140286 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.157255 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.158801 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.159498 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: 
connection refused" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162699 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162804 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162956 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.162981 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163067 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163087 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.163208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264706 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264777 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264799 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264888 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264910 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265022 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.264960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265191 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265230 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265359 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.265291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.359663 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.361164 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.361755 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.424430 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.430237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.448002 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.466903 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.470970 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.473198 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77 WatchSource:0}: Error finding container 46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77: Status 404 returned error can't find the container with id 46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77 Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.473777 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172 WatchSource:0}: Error finding container 4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172: Status 404 returned error can't find the container with id 4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172 Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.487608 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d WatchSource:0}: Error finding container 5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d: Status 404 returned error can't find the container with id 5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d Feb 14 04:09:29 crc kubenswrapper[4867]: W0214 04:09:29.489465 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7 WatchSource:0}: Error finding container 944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7: Status 404 returned error can't find the container with id 944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7 Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.541409 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 
04:09:29.762267 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763947 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.763995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.764016 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: E0214 04:09:29.764372 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.934231 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:29 crc kubenswrapper[4867]: I0214 04:09:29.939218 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:37:34.555451055 +0000 UTC Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.000898 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d5df1d6f504d3df72192b61ee87a9edaf65546935df593f3d941db9b1a30220b"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.002769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5160e8d4ce2a4297674e730207cdfd905b5e676ac1b9b9c937d380dd67ad9e6d"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.004411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"46e9b8e22e2be7f717536dffa7529d7a97bdffebd9250ddec5e65d5d5f016d77"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.005850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a1bb8a3dfe17859d34e5eed972a7741459e836f78cc358592caf6be6c31f172"} Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.007557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"944a9baef757973ab049cf70e903aa7f527656f3cfe6a2b91bbe6c555afd69e7"} Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.049616 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 
04:09:30.049701 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.156562 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.156645 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.343628 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.386478 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.386699 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: W0214 04:09:30.407906 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.408044 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.565432 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.567745 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:30 crc kubenswrapper[4867]: E0214 04:09:30.568428 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.935042 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:30 crc kubenswrapper[4867]: I0214 04:09:30.940201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:21:01.666447061 +0000 UTC Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.079468 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:50:28.755568387 +0000 UTC Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.079931 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 04:09:32 crc kubenswrapper[4867]: W0214 04:09:32.079924 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.080010 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.080003 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.081097 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.082343 4867 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.169242 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.170988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 
04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.171072 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.171775 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.113:6443: connect: connection refused" node="crc" Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.351856 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18940178218205da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,LastTimestamp:2026-02-14 04:09:28.933320154 +0000 UTC m=+1.014257508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:09:32 crc kubenswrapper[4867]: W0214 04:09:32.554391 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:32 crc kubenswrapper[4867]: E0214 04:09:32.554870 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:32 crc kubenswrapper[4867]: I0214 04:09:32.934299 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.079831 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 14:40:52.820159508 +0000 UTC Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090702 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.090882 4867 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.091997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092228 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.092333 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.093222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094215 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094323 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094580 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.094952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095394 4867 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.095402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097229 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e" exitCode=0 Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.097483 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.099701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.099730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.099748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104438 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104477 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a"} Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.104595 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:33 crc kubenswrapper[4867]: W0214 04:09:33.105359 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:33 crc kubenswrapper[4867]: E0214 04:09:33.105473 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.106074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:33 crc kubenswrapper[4867]: W0214 04:09:33.155240 4867 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:33 crc kubenswrapper[4867]: E0214 04:09:33.155354 4867 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.113:6443: connect: connection refused" logger="UnhandledError" Feb 14 04:09:33 crc kubenswrapper[4867]: I0214 04:09:33.934606 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.025044 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.037446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.080187 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 02:38:52.508321325 +0000 UTC Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112329 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112350 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.112366 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114409 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f" exitCode=0 Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.114563 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.115690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117100 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117103 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.117207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.118797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.121035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f"} Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.121068 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:34 crc 
kubenswrapper[4867]: I0214 04:09:34.121200 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.122359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.182166 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.189500 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:34 crc kubenswrapper[4867]: I0214 04:09:34.934608 4867 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.113:6443: connect: connection refused Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.080765 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:02:31.03602251 +0000 UTC Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.127621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"} Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.127673 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.128687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130535 4867 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08" exitCode=0 Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130604 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 
14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130660 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08"} Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130709 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130791 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.130854 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131338 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.131999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.132022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.132034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.372725 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 
04:09:35.374758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.374790 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:35 crc kubenswrapper[4867]: I0214 04:09:35.753270 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.081167 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:19:52.696512746 +0000 UTC Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582"} Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136498 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136528 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d"} Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936"} Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a"} Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136586 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136599 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.136600 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137965 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.138012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.137982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.138060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:36 crc kubenswrapper[4867]: I0214 04:09:36.390494 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.025538 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.025646 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.082127 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 23:49:17.805640143 +0000 UTC Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"85486406cb9ccb97ccb382e44c3c4372c54609d367aeec7a04ddfa06424c9cd6"} Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143806 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.143900 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.144679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145099 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.145108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.995973 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.996208 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997810 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:37 crc kubenswrapper[4867]: I0214 04:09:37.997829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.055837 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.083050 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 17:20:31.753755814 +0000 UTC Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.117562 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.146232 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.146420 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.147961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.147999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:38 crc kubenswrapper[4867]: I0214 04:09:38.148064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:39 crc kubenswrapper[4867]: E0214 04:09:39.068170 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.083271 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:14:43.607269607 +0000 UTC Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.148826 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.150070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 
04:09:39.150130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.150150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.229625 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.229864 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:39 crc kubenswrapper[4867]: I0214 04:09:39.231281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.058951 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.084201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:12:37.165607116 +0000 UTC Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.151217 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:40 crc kubenswrapper[4867]: I0214 04:09:40.152740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:41 crc kubenswrapper[4867]: I0214 04:09:41.084688 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:52:27.195226736 +0000 UTC Feb 14 04:09:42 crc kubenswrapper[4867]: I0214 04:09:42.085590 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:46:17.518804751 +0000 UTC Feb 14 04:09:43 crc kubenswrapper[4867]: I0214 04:09:43.086776 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:43:15.672784504 +0000 UTC Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.042566 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.042693 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.043946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:44 crc 
kubenswrapper[4867]: I0214 04:09:44.044027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.044051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:44 crc kubenswrapper[4867]: I0214 04:09:44.087243 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:37:33.431158703 +0000 UTC Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.088156 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:48:46.714722648 +0000 UTC Feb 14 04:09:45 crc kubenswrapper[4867]: E0214 04:09:45.281271 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 14 04:09:45 crc kubenswrapper[4867]: E0214 04:09:45.375590 4867 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.434611 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.434690 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.441938 4867 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Feb 14 04:09:45 crc kubenswrapper[4867]: I0214 04:09:45.441984 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.088315 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:53:14.428551025 +0000 UTC 
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.170175 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172002 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687" exitCode=255
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"}
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.172204 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:46 crc kubenswrapper[4867]: I0214 04:09:46.173864 4867 scope.go:117] "RemoveContainer" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.026343 4867 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.026454 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.088959 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 21:04:01.589474723 +0000 UTC
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.176796 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.178905 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48"}
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.179105 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:47 crc kubenswrapper[4867]: I0214 04:09:47.180121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.084761 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.084949 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.086138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.089051 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 01:21:56.073156678 +0000 UTC
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.097364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.182099 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.182993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.183019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:48 crc kubenswrapper[4867]: I0214 04:09:48.183029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:49 crc kubenswrapper[4867]: E0214 04:09:49.068272 4867 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.089388 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:37:35.822211374 +0000 UTC
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.235854 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.236317 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.236472 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
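The generic.go / kubelet.go pair above is the pod lifecycle event generator (PLEG) at work: relisting notices that the kube-apiserver pod's check-endpoints container exited (exitCode=255), a ContainerDied event is fed into the sync loop, the scope.go "RemoveContainer" line garbage-collects the dead container, and one second later a ContainerStarted event confirms the replacement. A toy sketch of that dispatch, with field names that are illustrative rather than the upstream definitions:

```go
package main

import "fmt"

// PodLifecycleEvent is a trimmed-down stand-in for the kubelet's PLEG
// event type (the real one lives in pkg/kubelet/pleg).
type PodLifecycleEvent struct {
	PodID       string
	Type        string // "ContainerDied", "ContainerStarted", ...
	ContainerID string
}

func handle(ev PodLifecycleEvent) {
	switch ev.Type {
	case "ContainerDied":
		// The kubelet records the exit, lets the sync loop decide whether
		// the restart policy calls for a new container, and eventually
		// garbage-collects the dead one (the "RemoveContainer" log line).
		fmt.Printf("container %s died; scheduling pod %s for sync\n", ev.ContainerID, ev.PodID)
	case "ContainerStarted":
		fmt.Printf("container %s started for pod %s\n", ev.ContainerID, ev.PodID)
	}
}

func main() {
	handle(PodLifecycleEvent{PodID: "f4b27818…", Type: "ContainerDied", ContainerID: "b9a86a9d…"})
	handle(PodLifecycleEvent{PodID: "f4b27818…", Type: "ContainerStarted", ContainerID: "37c96b25…"})
}
```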
event="NodeHasSufficientMemory" Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.237933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.237956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:49 crc kubenswrapper[4867]: I0214 04:09:49.245386 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.089846 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:07:24.989012886 +0000 UTC Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.187679 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.188889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.449416 4867 trace.go:236] Trace[2028999215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:38.273) (total time: 12175ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[2028999215]: ---"Objects listed" error: 12175ms (04:09:50.449) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[2028999215]: [12.175665655s] [12.175665655s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.449468 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.450932 4867 trace.go:236] Trace[878925746]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:36.658) (total time: 13792ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[878925746]: ---"Objects listed" error: 13792ms (04:09:50.450) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[878925746]: [13.792414679s] [13.792414679s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.450998 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.453756 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454124 4867 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454299 4867 trace.go:236] Trace[148007666]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:37.974) (total time: 12480ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[148007666]: ---"Objects listed" error: 12479ms (04:09:50.453) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[148007666]: [12.480125756s] [12.480125756s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.454393 4867 reflector.go:368] Caches populated for *v1.Node 
from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.455720 4867 trace.go:236] Trace[1976206573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (14-Feb-2026 04:09:36.799) (total time: 13655ms): Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[1976206573]: ---"Objects listed" error: 13655ms (04:09:50.455) Feb 14 04:09:50 crc kubenswrapper[4867]: Trace[1976206573]: [13.65576066s] [13.65576066s] END Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.455762 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.483898 4867 csr.go:261] certificate signing request csr-kn5td is approved, waiting to be issued Feb 14 04:09:50 crc kubenswrapper[4867]: I0214 04:09:50.497449 4867 csr.go:257] certificate signing request csr-kn5td is issued Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.089889 4867 apiserver.go:52] "Watching apiserver" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.090058 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 12:25:26.44749354 +0000 UTC Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.100920 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.101569 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-l6v69","openshift-machine-config-operator/machine-config-daemon-4s95t","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102430 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102485 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103210 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103398 4867 util.go:30] "No sandbox for pod can be found. 
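The Trace[...] blocks are client-go reflectors timing their initial LIST: each took 12-13 s because the API server only just became reachable, after which the local caches for *v1.RuntimeClass, *v1.CSIDriver, *v1.Node, *v1.Service and friends are declared populated and the reflectors switch to WATCH. A minimal sketch of that same machinery; the kubeconfig path is an assumption for illustration:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Each informer owns a reflector that LISTs, then WATCHes, its type.
	factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// The equivalent of "Caches populated for *v1.Node": block until the
	// initial LIST has been stored in the local cache.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("node cache populated")
}
```

The csr.go lines in the same burst are the other half of the serving-certificate story from earlier: once the API server answers, the kubelet's pending CSR (csr-kn5td) is approved and issued, which is what finally quiets the repeating rotation-deadline messages.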
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.103398 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103439 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.102358 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103678 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.103741 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.104841 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105722 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.105954 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.106185 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.106752 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fl729"]
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107401 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-9st5b"]
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107886 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"]
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.107903 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fl729"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108025 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108099 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108295 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9st5b"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108442 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108571 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108608 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108917 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108970 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109273 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109318 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109435 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.108932 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
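The repeated "Error syncing pod, skipping" failures above all reduce to one condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ is still empty, and pods that need the pod network stay queued until the network provider (here ovn-kubernetes, via multus) writes a CNI configuration there. A simplified sketch of that readiness check; the real check lives in the container runtime, and the accepted file extensions here are an assumption:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniReady reports whether the CNI configuration directory contains at
// least one plausible network configuration file.
func cniReady(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := cniReady("/etc/kubernetes/cni/net.d")
	fmt.Println("CNI ready:", ready, "err:", err)
}
```

Note the chicken-and-egg shape: the very pods that will create that configuration (multus-fl729, multus-additional-cni-plugins-9st5b, ovnkube-node-6nndn) are host-network pods, so they can start without it and break the deadlock.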
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109447 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.109600 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.110956 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111298 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111388 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.111525 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114171 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114603 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114474 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114774 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114787 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.114953 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115164 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115170 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.115260 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.137591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.139474 4867 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.157967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158039 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158059 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
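The status patch above fails not in the kubelet but in the API server, which must call the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 and gets connection refused: the webhook is served by one of the very pods still being restarted, a bootstrap ordering loop that resolves itself once that pod is up. The failure mode is reproducible with a plain TCP dial:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the API server tries to reach for the admission webhook.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook unreachable:", err) // "connection refused" while the pod is down
		return
	}
	conn.Close()
	fmt.Println("webhook endpoint is accepting connections")
}
```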
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158099 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158156 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158206 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158223 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158255 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158276 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158281 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158292 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158388 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158430 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158466 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158487 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158533 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158602 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158638 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158655 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158689 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158735 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158768 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158865 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158958 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.158993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159028 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
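The escaped JSON inside the status_manager errors is a strategic merge patch against the pod's status subresource; the $setElementOrder/conditions directive preserves the ordering of the conditions list during the merge. A minimal sketch of issuing such a patch with client-go — the namespace and pod name are taken from the log, the kubeconfig path is an assumption:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// A much-reduced version of the condition update the kubelet sends.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)
	_, err = clientset.CoreV1().Pods("openshift-network-node-identity").Patch(
		context.TODO(), "network-node-identity-vrzqb",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		// With the admission webhook above still down, this is where the
		// "failed calling webhook … connection refused" error surfaces.
		panic(err)
	}
}
```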
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159393 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159555 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159628 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159920 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159194 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.159993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160057 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160151 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160174 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160373 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160444 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160310 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169524 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.160535 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:09:51.660492055 +0000 UTC m=+23.741429359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160907 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160889 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161135 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161145 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161156 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.161481 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169857 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162545 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.162941 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163313 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163710 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.163995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164254 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.164668 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165246 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.165396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166660 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.166986 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.167434 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.167948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168058 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168274 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168455 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168487 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.168548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169359 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.160974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169767 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170365 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170386 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170223 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.169953 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170587 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.170633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170650 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170666 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170720 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.170824 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171103 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171165 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171449 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171922 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171920 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.171012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172344 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172483 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172597 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172711 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172826 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172845 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172865 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172867 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172887 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172970 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.172987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173007 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173044 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173079 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173137 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173175 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173198 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173219 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173236 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173266 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173307 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173329 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173344 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173413 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173362 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173460 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173893 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173936 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.173974 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174078 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174135 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174280 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174350 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174399 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174440 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174620 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174814 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174889 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174923 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174958 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174993 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175145 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175220 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175335 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175371 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175474 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175860 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175895 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175934 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176119 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176168 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176222 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176275 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.176367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174016 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174063 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174288 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174637 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.174659 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175239 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175585 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175548 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.175851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176247 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). 
InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176419 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177596 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177635 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177782 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177817 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177891 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178048 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 14 04:09:51 crc 
kubenswrapper[4867]: I0214 04:09:51.178195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178313 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178397 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178430 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178501 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178589 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178871 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178898 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178911 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178933 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.178993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179023 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179158 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179214 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179242 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179388 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8lr\" (UniqueName: \"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179549 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.179998 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180023 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180068 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180090 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180180 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180233 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " 
pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180279 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181205 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181225 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181242 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181257 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181273 4867 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181292 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181307 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181322 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181338 4867 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181354 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181389 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181406 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181422 4867 
reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181437 4867 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181455 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181469 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181486 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181498 4867 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181536 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181550 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181563 4867 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181592 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181607 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181621 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181634 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181649 4867 reconciler_common.go:293] 
"Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181664 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181679 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181692 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181707 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181756 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181774 4867 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181787 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181840 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181855 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181894 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181909 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181926 4867 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181940 4867 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181990 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182007 4867 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182020 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182072 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182110 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182124 4867 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182140 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182177 4867 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182190 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182204 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182216 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 
04:09:51.182230 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182246 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182261 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182274 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182288 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182303 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182316 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182329 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182342 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182356 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182350 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.180929 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176476 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177093 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.177121 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181164 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181614 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.181719 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.176423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182814 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.182925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183014 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.183949 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184245 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184545 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.184974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.185053 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.185202 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.185888 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.185957 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.685940085 +0000 UTC m=+23.766877399 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.186901 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.186980 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.686967062 +0000 UTC m=+23.767904606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187629 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187730 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187898 4867 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.187990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.188673 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.188892 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189761 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.189872 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.190339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.190363 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.191011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192182 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192267 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192310 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192474 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192495 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192533 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192552 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192572 4867 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192590 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192605 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192619 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192634 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192652 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node 
\"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192667 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192681 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192695 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192709 4867 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192725 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192743 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192867 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192898 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.192910 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193042 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193060 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193080 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193101 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193120 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193137 4867 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193151 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193164 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193178 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193195 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193208 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193222 4867 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193239 4867 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 14 
04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193257 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193275 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.193485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194233 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194420 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.194616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.195249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.196683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.200703 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.201220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.206124 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.206351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207072 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207857 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207888 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.207886 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.207907 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208031 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.708003124 +0000 UTC m=+23.788940468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208240 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208256 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208271 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.208329 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:51.708306862 +0000 UTC m=+23.789244176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208744 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.208864 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209166 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209282 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.209674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.211471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.212419 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.215869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.216038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.216979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.217795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.217845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.218292 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.218442 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.219663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220162 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.220553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.224536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.225823 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226627 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.226674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.231682 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.238881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.245629 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.254464 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.256878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.261812 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.274390 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294359 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod 
\"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294454 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294418 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-etc-kubernetes\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-system-cni-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294972 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.294998 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295037 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-os-release\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295133 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-conf-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295289 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 
04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-bin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295416 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295521 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-socket-dir-parent\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295580 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8lr\" (UniqueName: \"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: 
I0214 04:09:51.295836 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295866 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295865 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295897 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-netns\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-k8s-cni-cncf-io\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295938 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-os-release\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296100 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-cni-multus\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-cnibin\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-rootfs\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296225 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296253 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-cni-binary-copy\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296606 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296692 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-daemon-config\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-run-multus-certs\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.295433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/2afb01bb-2288-4e50-aa66-3e5f2685af58-hosts-file\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.296012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-system-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297902 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-hostroot\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/d645541b-4940-4e53-a506-1b42bd296dfb-cnibin\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.297934 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-host-var-lib-kubelet\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fb77d03e-6ead-48b5-a96a-db4cbd540192-multus-cni-dir\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298583 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298621 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-binary-copy\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298632 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298671 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298690 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/d645541b-4940-4e53-a506-1b42bd296dfb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298698 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298757 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298774 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298787 4867 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298800 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298814 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-mcd-auth-proxy-config\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298826 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298859 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298877 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298891 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298906 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298922 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298933 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.298989 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299006 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299039 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299048 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299057 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299068 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299079 4867 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299090 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299130 4867 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299142 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299152 4867 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299164 4867 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299210 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299227 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299239 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299252 4867 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299263 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299300 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299314 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299325 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299337 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299349 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299388 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299404 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299416 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299428 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299465 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299479 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299490 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299538 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299553 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299564 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299575 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299586 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299624 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299636 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299647 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299662 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299674 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299717 4867 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299729 4867 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299742 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299754 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299800 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299812 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299825 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299837 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299880 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299894 4867 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299907 4867 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299919 4867 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299957 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299969 4867 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299980 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.299992 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300003 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300039 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300054 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300066 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300080 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300092 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300130 4867 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300143 
4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300154 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300166 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300202 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300217 4867 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300228 4867 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300238 4867 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300250 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300288 4867 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300301 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300312 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300322 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300333 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300369 4867 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300383 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300394 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300404 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300416 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300451 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300464 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.300490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.302274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-proxy-tls\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.302383 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.306493 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.315163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64stb\" (UniqueName: \"kubernetes.io/projected/2afb01bb-2288-4e50-aa66-3e5f2685af58-kube-api-access-64stb\") pod \"node-resolver-l6v69\" (UID: \"2afb01bb-2288-4e50-aa66-3e5f2685af58\") " pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.316419 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brktz\" (UniqueName: \"kubernetes.io/projected/5992e46c-bce7-4b9f-82f2-c7ffb93286cd-kube-api-access-brktz\") pod \"machine-config-daemon-4s95t\" (UID: \"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\") " pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.319233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8lr\" (UniqueName: \"kubernetes.io/projected/d645541b-4940-4e53-a506-1b42bd296dfb-kube-api-access-nd8lr\") pod \"multus-additional-cni-plugins-9st5b\" (UID: \"d645541b-4940-4e53-a506-1b42bd296dfb\") " pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.320074 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gznnx\" (UniqueName: \"kubernetes.io/projected/fb77d03e-6ead-48b5-a96a-db4cbd540192-kube-api-access-gznnx\") pod \"multus-fl729\" (UID: \"fb77d03e-6ead-48b5-a96a-db4cbd540192\") " pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.321208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"ovnkube-node-6nndn\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.321807 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.334527 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.379472 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.424854 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.430821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.438857 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-l6v69" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.446207 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.449762 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be WatchSource:0}: Error finding container 8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be: Status 404 returned error can't find the container with id 8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.451120 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3 WatchSource:0}: Error finding container e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3: Status 404 returned error can't find the container with id e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.459075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.467698 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fl729" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.475238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9st5b" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.476481 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c WatchSource:0}: Error finding container 717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c: Status 404 returned error can't find the container with id 717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.489578 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5992e46c_bce7_4b9f_82f2_c7ffb93286cd.slice/crio-8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840 WatchSource:0}: Error finding container 8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840: Status 404 returned error can't find the container with id 8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840 Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.498971 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-14 04:04:50 +0000 UTC, rotation deadline is 2026-11-07 05:31:29.29019253 +0000 UTC Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.499549 4867 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6385h21m37.7906488s for next certificate rotation Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.502124 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.513033 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb77d03e_6ead_48b5_a96a_db4cbd540192.slice/crio-873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331 WatchSource:0}: Error finding container 873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331: Status 404 returned error can't find the container with id 873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331 Feb 14 04:09:51 crc kubenswrapper[4867]: W0214 04:09:51.540344 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd645541b_4940_4e53_a506_1b42bd296dfb.slice/crio-433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a WatchSource:0}: Error finding container 433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a: Status 404 returned error can't find the container with id 433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.711648 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712006 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.711959191 +0000 UTC m=+24.792896505 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712231 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.712255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712283 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712333 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712319071 +0000 UTC m=+24.793256385 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712351 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712365 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712375 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712402 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712393843 +0000 UTC m=+24.793331157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712472 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712514 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712488115 +0000 UTC m=+24.793425429 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712554 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712564 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712570 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.712588 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:52.712582828 +0000 UTC m=+24.793520142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.775712 4867 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.778211 4867 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.790068 4867 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.790454 4867 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.794895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795157 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.795322 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.806881 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810949 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810958 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.810984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.822623 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825787 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.825811 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.839392 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846520 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.846552 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.859254 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862842 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.862869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.875874 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:51 crc kubenswrapper[4867]: E0214 04:09:51.876033 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881824 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.881864 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:51 crc kubenswrapper[4867]: I0214 04:09:51.984311 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:51Z","lastTransitionTime":"2026-02-14T04:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.086781 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.090890 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:48:28.97479689 +0000 UTC Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.189301 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.195645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"717c03e8fbb0972418c14940e7fc89e04cc838574e077da1cf7a1741efa88f2c"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.197210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l6v69" event={"ID":"2afb01bb-2288-4e50-aa66-3e5f2685af58","Type":"ContainerStarted","Data":"a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.197261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-l6v69" event={"ID":"2afb01bb-2288-4e50-aa66-3e5f2685af58","Type":"ContainerStarted","Data":"f590eff1e465dd61ee0ef4b9d9a120ddae3c21e03088c79a3e6e0cecc1c6f79e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198839 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.198903 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e0339c932b98dcb5cf2ac59eef115a838c0d9243ce93b773e55a3a02f67b6fa3"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.200222 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.200246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8f62492d6d7982717b9ac621b1bb111c49bae9d6da799e0f1b454669693102be"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203088 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" exitCode=0 Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.203208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"766035eb89c0e6059ab573e34c9ca67206f8aeefdcb68c749029bbaceeefc307"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.205278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.205305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"433c012dfbbda658b5dd2c476aba6a094b1c71e752198db24f61fe1beedfcf8a"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.206474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.206497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"873bdb0e3e5dc5374d35049d34e08f519588b676fecf70d774756d715ce02331"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.208740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"8d37ac335c77fd83330a4118ee880ad78776f98dc45e069afd59cee1eb4a1840"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.214796 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.222266 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.230315 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.247113 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.258864 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.270265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.279749 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.290790 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292312 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.292405 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.304998 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.314588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.324289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.333656 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.343297 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.353837 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.379400 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.395652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.400086 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b8
29497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.415227 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.428766 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.445643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.462014 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\
\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.473782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.484525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.494989 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.498164 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.507311 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.600391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.702935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.703065 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723047 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723146 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723280 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723297 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723244187 +0000 UTC m=+26.804181511 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723350 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.7233374 +0000 UTC m=+26.804274934 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723357 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723478 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723453953 +0000 UTC m=+26.804391267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.723599 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723638 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723661 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723680 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723736 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.72372481 +0000 UTC m=+26.804662144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723778 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723794 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723808 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.723837 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:54.723829753 +0000 UTC m=+26.804767067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.805728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
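Two distinct volume failures are interleaved above. The PVC teardown fails because the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with the freshly restarted kubelet ("not found in the list of registered CSI drivers"), and the secret/configmap/projected mounts fail with "object ... not registered" because the kubelet's local secret/configmap cache does not yet track those objects for the affected pods; both are retried every 2s and clear once registration catches up. Node-side CSI registration can be checked with standard commands (a sketch; the node and driver names are taken from the log):

    kubectl get csinode crc -o jsonpath='{.spec.drivers[*].name}'   # should eventually include kubevirt.io.hostpath-provisioner
    ls /var/lib/kubelet/plugins_registry/                           # registration sockets CSI drivers expose to the kubelet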
Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.908881 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:52Z","lastTransitionTime":"2026-02-14T04:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999120 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999417 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999471 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:52 crc kubenswrapper[4867]: I0214 04:09:52.999527 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:52 crc kubenswrapper[4867]: E0214 04:09:52.999577 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.002435 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.003189 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.004368 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.005006 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.006032 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.006588 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.007176 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.010936 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.011057 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.011854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.012861 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.013439 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.015882 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.016430 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.016958 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.017907 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.018400 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.019413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.019875 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.020438 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.021488 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.022001 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 14 04:09:53 
crc kubenswrapper[4867]: I0214 04:09:53.022926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.023346 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.024314 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.024779 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.025413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.027618 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.028090 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.029050 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.029552 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.030383 4867 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.030482 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.032034 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.032907 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.033362 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 14 04:09:53 crc 
kubenswrapper[4867]: I0214 04:09:53.035168 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.036173 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.036674 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.037295 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.038363 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.038860 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.039926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.040922 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.041548 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.042450 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.043011 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.043912 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.044640 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.045467 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.046023 4867 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.046468 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.047342 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.047934 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.048873 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.091741 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 17:56:06.882159776 +0000 UTC Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.113984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.218564 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222002 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222066 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.222085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.223924 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2" exitCode=0 Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.223962 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.242119 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.264946 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc 
kubenswrapper[4867]: I0214 04:09:53.282911 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.303123 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.322438 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.326418 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.326994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.327778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.327924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.328028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.345587 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.362376 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.377700 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.398877 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni
/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.414031 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.427120 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.431982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.432003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.432016 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.447450 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.536420 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640569 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.640595 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.743896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.846778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.846951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.847574 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950829 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:53 crc kubenswrapper[4867]: I0214 04:09:53.950883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:53Z","lastTransitionTime":"2026-02-14T04:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.032261 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.050263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.053657 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.055093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.055759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.080403 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.092842 4867 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:48:01.230754758 +0000 UTC Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.098216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.115216 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc 
kubenswrapper[4867]: I0214 04:09:54.129354 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.140588 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.151964 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155937 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.155951 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.163474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.181395 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.196431 
4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.212541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.233374 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.235874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.235881 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3" exitCode=0 Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.241221 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.256772 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259527 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.259586 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.279681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b8
29497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.294898 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.309231 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.324309 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.342581 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.358159 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.363332 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.378808 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.394404 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.408322 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.420331 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.434590 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.448720 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:54Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.466444 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.570796 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675404 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.675424 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744146 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.744561 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.744450003 +0000 UTC m=+30.825387367 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744894 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.744968 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.745103 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745145 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745204 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745237 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745148 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745350 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-02-14 04:09:58.745325376 +0000 UTC m=+30.826262730 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745368 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745433 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.745395378 +0000 UTC m=+30.826332732 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745391 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745495 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745551 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.74547043 +0000 UTC m=+30.826407774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745559 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.745624 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:09:58.745609604 +0000 UTC m=+30.826546958 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.778654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.883873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.884687 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989618 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.989648 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:54Z","lastTransitionTime":"2026-02-14T04:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996948 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.996996 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.997189 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:54 crc kubenswrapper[4867]: I0214 04:09:54.996817 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:54 crc kubenswrapper[4867]: E0214 04:09:54.997373 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.092955 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 03:35:23.241655058 +0000 UTC Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.093304 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196457 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.196658 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.242396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.245948 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2" exitCode=0 Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.246011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.273210 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.292707 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.300356 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.322614 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.341455 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.358635 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.375895 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 
2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.395295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.404201 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.417154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b8
29497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.429565 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.447607 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.462709 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.480154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.494894 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.507951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.507997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.508038 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.510911 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.530440 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.548490 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.568143 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.583734 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.598708 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed 
certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}
}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.610464 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.611865 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.625264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.642232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.659468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.670681 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 
2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.685209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.709523 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:55Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.714824 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.817986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:55 crc kubenswrapper[4867]: I0214 04:09:55.920918 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:55Z","lastTransitionTime":"2026-02-14T04:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.023709 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.093598 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:22:57.845939 +0000 UTC Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.126326 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.228977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.229015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.229036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.253412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.256682 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed" exitCode=0 Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.256892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.274763 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.302398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.321366 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.335492 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.339688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.354224 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.365355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.368813 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qbv2g"] Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.369153 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.370436 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371910 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.371935 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.380969 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.401261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.416277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.427766 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 
2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.437669 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.440755 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.454599 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.463451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.471539 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.485608 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.497209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.510479 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.528412 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z 
is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.540083 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.541277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.553990 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.563949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.564114 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e55b70fd-de82-48c9-b879-de727928e084-host\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.565121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/e55b70fd-de82-48c9-b879-de727928e084-serviceca\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.568108 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.581855 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.588650 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghrlq\" (UniqueName: \"kubernetes.io/projected/e55b70fd-de82-48c9-b879-de727928e084-kube-api-access-ghrlq\") pod \"node-ca-qbv2g\" (UID: \"e55b70fd-de82-48c9-b879-de727928e084\") " pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.597489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.619830 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.633654 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.642627 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.647446 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.661265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.675523 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:56Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.687299 4867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-image-registry/node-ca-qbv2g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.747693 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.851818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.955605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:56Z","lastTransitionTime":"2026-02-14T04:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:56 crc kubenswrapper[4867]: I0214 04:09:56.996716 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.996898 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.997115 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:56 crc kubenswrapper[4867]: E0214 04:09:56.997232 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059190 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.059212 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.094579 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:13:23.403798782 +0000 UTC Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162582 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.162634 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.262009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qbv2g" event={"ID":"e55b70fd-de82-48c9-b879-de727928e084","Type":"ContainerStarted","Data":"4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.262092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qbv2g" event={"ID":"e55b70fd-de82-48c9-b879-de727928e084","Type":"ContainerStarted","Data":"a9291288bede55d3a5542beca321ac1c9b6dcd94142fc8fcac273384dc5764c8"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.265285 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.267777 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57" exitCode=0 Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.267837 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.287768 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.305295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.322885 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.343302 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.363457 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.368390 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.380127 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.394789 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-1
4T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.409784 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.434994 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z 
is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.458359 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.471493 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.480836 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.496207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.511705 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.527289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.540873 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.555670 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.569419 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.573793 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.584184 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.597624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.607713 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.619341 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.628541 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.642779 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.665598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z 
is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.678758 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.680052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-
14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.692265 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.705666 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.727062 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:57Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.780875 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.888824 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:57 crc kubenswrapper[4867]: I0214 04:09:57.991737 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:57Z","lastTransitionTime":"2026-02-14T04:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.093859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.095020 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:12:48.974108267 +0000 UTC
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.196200 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280358 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.280609 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.286429 4867 generic.go:334] "Generic (PLEG): container finished" podID="d645541b-4940-4e53-a506-1b42bd296dfb" containerID="84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7" exitCode=0
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.286559 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerDied","Data":"84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.296159 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.300786 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.309080 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.309199 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.314584 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.333318 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.346814 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.360629 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.379643 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.395940 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.404710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.409342 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.425278 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.441985 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.459465 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.479423 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z"
Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.497933 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb
8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508202 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.508305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.522350 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.542528 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.554876 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.576398 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.598640 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dc
de8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610520 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.610588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.611346 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.628422 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.645163 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.658103 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.670077 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.685578 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7
b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.703953 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.713243 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.717207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.730676 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.747402 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:58Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.748296 4867 transport.go:147] "Certificate 
rotation detected, shutting down client connections to start using new credentials" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.749059 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/events\": read tcp 38.102.83.113:33772->38.102.83.113:6443: use of closed network connection" event="&Event{ObjectMeta:{multus-additional-cni-plugins-9st5b.1894017f1272e637 openshift-multus 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:multus-additional-cni-plugins-9st5b,UID:d645541b-4940-4e53-a506-1b42bd296dfb,APIVersion:v1,ResourceVersion:26450,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Started,Message:Started container kube-multus-additional-cni-plugins,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:09:58.745441847 +0000 UTC m=+30.826379161,LastTimestamp:2026-02-14 04:09:58.745441847 +0000 UTC m=+30.826379161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783333 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.783381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783487 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783520 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783520 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783627 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783606256 +0000 UTC m=+38.864543570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783531 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783714 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783749 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783718069 +0000 UTC m=+38.864655383 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783759 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783630 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783783 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783813 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783805871 +0000 UTC m=+38.864743185 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783851 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783829712 +0000 UTC m=+38.864767046 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.783876 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.783866233 +0000 UTC m=+38.864803557 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.815453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.918460 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:58Z","lastTransitionTime":"2026-02-14T04:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.996867 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.997052 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:09:58 crc kubenswrapper[4867]: I0214 04:09:58.997081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997210 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:09:58 crc kubenswrapper[4867]: E0214 04:09:58.997341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.013185 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021212 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.021222 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.028370 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.046354 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.071205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dc
de8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.089190 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.095407 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:49:21.56343548 +0000 UTC Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.103353 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.116468 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.123216 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.130831 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.142104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.156307 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.170801 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.183009 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.196066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.208705 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226444 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.226902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.297388 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" event={"ID":"d645541b-4940-4e53-a506-1b42bd296dfb","Type":"ContainerStarted","Data":"80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.297487 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.314934 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.327639 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.329637 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.340209 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.361968 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dc
de8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.372213 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.383025 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.399804 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.413618 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.432931 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.434678 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.454781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f
567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.476791 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.490005 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.504045 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.520944 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:09:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.535682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.637966 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741522 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.741621 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.843805 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:09:59 crc kubenswrapper[4867]: I0214 04:09:59.946192 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:09:59Z","lastTransitionTime":"2026-02-14T04:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048571 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048641 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.048652 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.095758 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:51:22.785367467 +0000 UTC Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.151106 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.255724 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.318257 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.357937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.459910 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.562415 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664870 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.664883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.766989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.767087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.868978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.869049 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.971992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.972010 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:00Z","lastTransitionTime":"2026-02-14T04:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:00 crc kubenswrapper[4867]: I0214 04:10:00.996629 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:00 crc kubenswrapper[4867]: E0214 04:10:00.996924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.075093 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.095982 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 16:36:30.861279809 +0000 UTC Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177594 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.177629 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.280396 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.322976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326140 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" exitCode=1 Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326206 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.326913 4867 scope.go:117] "RemoveContainer" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.344264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.358158 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 
04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.370770 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.382253 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.383372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.401015 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.415755 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.16
8.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.428164 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.437737 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.448412 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.461824 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.473914 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.486796 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.488543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.500941 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.514469 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:01Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.589803 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.691883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793529 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.793563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.896958 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999403 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:01 crc kubenswrapper[4867]: I0214 04:10:01.999437 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:01Z","lastTransitionTime":"2026-02-14T04:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.083401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.096338 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:16:55.290504014 +0000 UTC Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.096520 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-mu
ltus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101178 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.101207 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.113765 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.131042 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\
\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.140712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.152259 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.162364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.174437 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.183154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.193218 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.194886 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.204535 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.207944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.208083 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.213491 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dc
de8086c0eb90b5dc685c6ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.220894 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.223908 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224100 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.224128 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.234568 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.235163 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.238982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.239094 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.250095 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.251358 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.254731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.266132 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.267751 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1
382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.268033 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269826 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.269841 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.332805 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.335831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.335981 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.349094 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.359382 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.372363 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.375778 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.398795 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 
04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.409873 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.421542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.433114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.448822 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.463003 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.475931 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.475998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.476087 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.477379 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:
09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.492261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.503018 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.513172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.522660 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:02Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.578731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.680867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.782881 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.885414 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.987744 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:02Z","lastTransitionTime":"2026-02-14T04:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.996984 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.997155 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:02 crc kubenswrapper[4867]: I0214 04:10:02.997179 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997278 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997442 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:02 crc kubenswrapper[4867]: E0214 04:10:02.997584 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.091541 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.096756 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:33:02.271878069 +0000 UTC Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.193906 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296678 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.296712 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.339822 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.340873 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/0.log" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343795 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" exitCode=1 Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343848 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.343998 4867 scope.go:117] "RemoveContainer" containerID="32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.344793 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" Feb 14 04:10:03 crc kubenswrapper[4867]: E0214 04:10:03.344961 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.358075 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.370413 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.386945 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c
c78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.396464 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.398935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.398989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.399030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.408735 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.420372 4867 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.430364 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.442958 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.445584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr"] Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.445988 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.447588 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.448644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.455754 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name
\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.469832 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cn
i-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.479672 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.490306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505139 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.505114 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.519049 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.534631 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.545106 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546554 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.546681 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.555946 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.566264 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.578759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.589702 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67
d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.599174 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.607807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.613137 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.629414 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: 
ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.638269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647585 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.647703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.648213 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-env-overrides\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.648875 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.648979 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/05957e01-c589-4408-8f80-cd33f8856262-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.653679 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/05957e01-c589-4408-8f80-cd33f8856262-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.657701 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.664078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj65g\" (UniqueName: \"kubernetes.io/projected/05957e01-c589-4408-8f80-cd33f8856262-kube-api-access-nj65g\") pod \"ovnkube-control-plane-749d76644c-dbvwr\" (UID: \"05957e01-c589-4408-8f80-cd33f8856262\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.669716 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.680572 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.691809 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.711926 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.756728 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" Feb 14 04:10:03 crc kubenswrapper[4867]: W0214 04:10:03.777275 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05957e01_c589_4408_8f80_cd33f8856262.slice/crio-70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6 WatchSource:0}: Error finding container 70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6: Status 404 returned error can't find the container with id 70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6 Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.814584 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:03 crc kubenswrapper[4867]: I0214 04:10:03.917169 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:03Z","lastTransitionTime":"2026-02-14T04:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.020359 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.097286 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:50:03.345451093 +0000 UTC Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122893 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.122954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.225228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.328679 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.347621 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.351858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" event={"ID":"05957e01-c589-4408-8f80-cd33f8856262","Type":"ContainerStarted","Data":"70356d1768809ef207dc89091e759f7babb03f91a97749a7f29a652b275fede6"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.366247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.379716 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.393847 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.404985 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b
88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.417065 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431324 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.431356 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.438960 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.453251 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.469194 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.481106 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.498796 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.516394 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c
c78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.527238 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533922 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.533957 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.539068 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.549113 4867 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.563308 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636315 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.636404 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739375 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.739448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.842479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.945928 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.946029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.946118 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:04Z","lastTransitionTime":"2026-02-14T04:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997258 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.997466 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997275 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:04 crc kubenswrapper[4867]: I0214 04:10:04.997908 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.997995 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:04 crc kubenswrapper[4867]: E0214 04:10:04.998279 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050223 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.050281 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.097725 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 05:40:00.386787958 +0000 UTC Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153489 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.153544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.256973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257063 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.257115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.280284 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"] Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.281168 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.281307 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.305387 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.325149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.342599 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.359790 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.365782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.389289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.410045 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.1
68.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.427182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.446769 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.460530 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.462839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.462970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.463260 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.465908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.465953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.487249 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.507962 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.527372 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.542247 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.558688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.565961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566168 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566388 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.566801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.566956 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:05 crc kubenswrapper[4867]: E0214 04:10:05.567051 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:06.067028504 +0000 UTC m=+38.147965838 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.587051 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c
c78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.588069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-272vg\" (UniqueName: \"kubernetes.io/projected/7206174b-645b-4924-8345-d1d4b1a5ec39-kube-api-access-272vg\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.603298 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:05Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669658 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.669743 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.772944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.875960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.876785 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980762 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980788 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:05 crc kubenswrapper[4867]: I0214 04:10:05.980808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:05Z","lastTransitionTime":"2026-02-14T04:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.072975 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.073325 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.073562 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:07.073471127 +0000 UTC m=+39.154408461 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.083940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.084082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.098241 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:09:24.748973193 +0000 UTC Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.186980 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.289619 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393638 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393738 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393804 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.393823 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.497710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.601757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.601965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.602174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.705808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.809123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.882650 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.882613735 +0000 UTC m=+54.963551059 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882861 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.882966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.883028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883142 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883173 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883198 4867 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883212 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883226 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883217 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883316 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883287123 +0000 UTC m=+54.964224477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883347 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883334835 +0000 UTC m=+54.964272189 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883323 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883170 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883474 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:10:22.883455478 +0000 UTC m=+54.964393032 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.883542 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:22.883489739 +0000 UTC m=+54.964427243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912300 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.912315 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:06Z","lastTransitionTime":"2026-02-14T04:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997045 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997176 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997247 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997307 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:06 crc kubenswrapper[4867]: I0214 04:10:06.997055 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997480 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:06 crc kubenswrapper[4867]: E0214 04:10:06.997696 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.016197 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.085580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:07 crc kubenswrapper[4867]: E0214 04:10:07.085776 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:07 crc kubenswrapper[4867]: E0214 04:10:07.085895 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:09.085864123 +0000 UTC m=+41.166801477 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.099463 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:07:18.054801582 +0000 UTC Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.118925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.119453 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.221954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.324726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.426832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.529706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.530329 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632580 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632648 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.632658 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.736273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839292 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.839325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942431 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:07 crc kubenswrapper[4867]: I0214 04:10:07.942443 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:07Z","lastTransitionTime":"2026-02-14T04:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.045162 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.099925 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:36:24.211384726 +0000 UTC Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.148593 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252761 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.252826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.357252 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.460548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.563983 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.668550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771643 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.771691 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.874934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.978663 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:08Z","lastTransitionTime":"2026-02-14T04:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.996915 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997009 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997050 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:08 crc kubenswrapper[4867]: I0214 04:10:08.997096 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997702 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997854 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.997998 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:08 crc kubenswrapper[4867]: E0214 04:10:08.998051 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.018999 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.034006 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.052117 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.073352 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67
d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.080459 4867 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.094791 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.100649 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 08:01:09.390211817 +0000 UTC Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.108040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod 
\"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:09 crc kubenswrapper[4867]: E0214 04:10:09.108147 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:09 crc kubenswrapper[4867]: E0214 04:10:09.108186 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:13.108173708 +0000 UTC m=+45.189111022 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.110096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.144052 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c
c78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.158484 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.174104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184204 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.184264 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.194277 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.210348 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.228283 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.249030 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.266276 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.279343 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.288977 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.294582 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.392715 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.393835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.394058 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.498982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.499002 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.602097 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.704407 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.807418 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910250 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:09 crc kubenswrapper[4867]: I0214 04:10:09.910316 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:09Z","lastTransitionTime":"2026-02-14T04:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.013729 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.101445 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:56:21.266624415 +0000 UTC Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.116933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.117064 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.219893 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.322341 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425649 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.425666 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.528598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.631854 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.735228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.837850 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940290 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.940354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:10Z","lastTransitionTime":"2026-02-14T04:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.996885 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.996978 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997024 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.997116 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997190 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:10 crc kubenswrapper[4867]: I0214 04:10:10.997215 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997374 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:10 crc kubenswrapper[4867]: E0214 04:10:10.997559 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.043294 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.102466 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:01:13.135265482 +0000 UTC Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.146408 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.249389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355523 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.355557 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.458062 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560764 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.560839 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.662985 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.764800 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866887 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.866925 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969107 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:11 crc kubenswrapper[4867]: I0214 04:10:11.969139 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:11Z","lastTransitionTime":"2026-02-14T04:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.071944 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.102878 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 01:55:00.290882013 +0000 UTC Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.174090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.276490 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
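The certificate_manager lines are worth a second look: the kubelet-serving certificate expiration is fixed at 2026-02-24 05:53:03, yet the rotation deadline moves on every pass (2025-11-29, then 2025-11-14, then 2026-01-03), because the deadline is re-drawn with jitter each time it is evaluated. A sketch of that jittered-deadline idea, assuming a uniform draw in roughly the 70-90% band of the validity window and a one-year validity; both are assumptions for illustration, the real logic lives in the Kubernetes client-go certificate manager:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the [70%, 90%] band of the
// certificate's validity window (assumed band; chosen because the
// deadlines in the log above scatter inside such a band).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(frac * float64(total)))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}

Each run lands on a different deadline, matching the way consecutive log lines disagree while the expiration never changes.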
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.378998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379028 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.379059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440876 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.440887 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.460483 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464422 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
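The "Error updating node status" record above is the first hard failure in this stretch: the patch is rejected not by the API server's validation but by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24, long before the node's current clock of 2026-02-14. A small Go sketch that reproduces the expiry check from the node, assuming only the endpoint quoted in the error:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the webhook error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip chain verification: the point is to read the
		// certificate dates, not to trust the peer.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("notAfter:", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired, matching the x509 error in the log")
	}
}

As long as that handshake fails, every status patch bounces with the same "will retry" error, which is why the identical record repeats below.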
event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.464474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.476595 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.480638 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.493185 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497824 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.497855 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.508220 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511220 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.511231 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.521891 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:12Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.522045 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523562 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.523573 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.625951 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.727989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.728005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.728018 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.830242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831447 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.831474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.934206 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:12Z","lastTransitionTime":"2026-02-14T04:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996289 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996350 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996422 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996415 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:12 crc kubenswrapper[4867]: I0214 04:10:12.996444 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996569 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996649 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:12 crc kubenswrapper[4867]: E0214 04:10:12.996747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.036174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.103016 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:42:58.259945215 +0000 UTC Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138446 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.138520 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.157030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:13 crc kubenswrapper[4867]: E0214 04:10:13.157161 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:13 crc kubenswrapper[4867]: E0214 04:10:13.157212 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:21.157196485 +0000 UTC m=+53.238133799 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.240973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241009 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.241042 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343624 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.343655 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446552 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.446702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.549999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.550017 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.550028 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652908 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.652972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.756278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859060 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.859082 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:13 crc kubenswrapper[4867]: I0214 04:10:13.962643 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:13Z","lastTransitionTime":"2026-02-14T04:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.065352 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.103263 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:44:44.538939343 +0000 UTC Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.167768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270500 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.270622 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.373307 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.476397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.579871 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.682919 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785454 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.785655 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888895 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.888978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.991467 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:14Z","lastTransitionTime":"2026-02-14T04:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997170 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.997419 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997727 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997808 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.997900 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:14 crc kubenswrapper[4867]: I0214 04:10:14.997756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.998004 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:14 crc kubenswrapper[4867]: E0214 04:10:14.998055 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.094182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.103475 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:59:59.174271026 +0000 UTC Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196096 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.196627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299385 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.299471 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401700 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.401788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.504495 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.607827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.711155 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.814141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.915959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:15 crc kubenswrapper[4867]: I0214 04:10:15.916039 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:15Z","lastTransitionTime":"2026-02-14T04:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.018619 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.104619 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:07:47.246874983 +0000 UTC Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.121403 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224145 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.224219 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327610 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.327668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.429944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.429988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.430035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532484 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532567 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.532645 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.634548 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.736993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.737010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.737024 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839712 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.839764 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.942621 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:16Z","lastTransitionTime":"2026-02-14T04:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996405 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996526 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996618 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996618 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996795 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:16 crc kubenswrapper[4867]: I0214 04:10:16.996638 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.996972 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:16 crc kubenswrapper[4867]: E0214 04:10:16.997023 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.044941 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.105040 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 19:48:16.587855307 +0000 UTC Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.147150 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.250209 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352709 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.352774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.455812 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.558984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662491 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.662673 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.766259 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869598 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.869726 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:17 crc kubenswrapper[4867]: I0214 04:10:17.972825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:17Z","lastTransitionTime":"2026-02-14T04:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.075606 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.105262 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 08:34:43.543143394 +0000 UTC
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178125 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.178158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.281175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.383784 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486376 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.486450 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.589656 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.692865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.796280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.898961 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:18Z","lastTransitionTime":"2026-02-14T04:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996316 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996359 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996639 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.996914 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:18 crc kubenswrapper[4867]: I0214 04:10:18.996988 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648"
Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997278 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997436 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:18 crc kubenswrapper[4867]: E0214 04:10:18.997578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.001955 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.002072 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.034295 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://32de50fe13796a05a11d846751a0d9aba8dcf9dcde8086c0eb90b5dc685c6ef8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:00Z\\\",\\\"message\\\":\\\"AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0214 04:10:00.855869 6162 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:00.856816 6162 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0214 04:10:00.856832 6162 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0214 04:10:00.856855 6162 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0214 04:10:00.856860 6162 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0214 04:10:00.856872 6162 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0214 04:10:00.856879 6162 handler.go:208] Removed *v1.Node event handler 7\\\\nI0214 04:10:00.856887 6162 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:00.856891 6162 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0214 04:10:00.856931 6162 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:00.856956 6162 factory.go:656] Stopping watch factory\\\\nI0214 04:10:00.856960 6162 handler.go:208] Removed *v1.Node event handler 2\\\\nI0214 04:10:00.856969 6162 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:00.856978 6162 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c
c78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.049328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.066789 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.082460 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.099401 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104692 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.104729 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.105404 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:03:49.409035308 +0000 UTC Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.116368 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.132994 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.147137 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.161856 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.174363 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.190416 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208101 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208114 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.208076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:
09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.222634 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.237853 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.252294 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.266489 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.281810 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.298096 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.310620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.312543 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.328823 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.344257 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.356462 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.370355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.398790 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be46565
05fbb478cda0c95a4e486648\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.408809 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.411781 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.412959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413011 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.413754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.425299 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kub
e-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.445271 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.465351 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.478661 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.493988 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7
781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.508283 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516289 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.516325 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.524516 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:19Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618146 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.618190 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720687 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.720760 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.823400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824162 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.824174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926459 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:19 crc kubenswrapper[4867]: I0214 04:10:19.926932 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:19Z","lastTransitionTime":"2026-02-14T04:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.030144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.106093 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:59:20.152911364 +0000 UTC Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132791 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.132818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235075 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.235103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.337866 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.418555 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.419644 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/1.log" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422180 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" exitCode=1 Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.422248 4867 scope.go:117] "RemoveContainer" containerID="6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.423542 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.423813 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440205 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440230 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440238 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.440262 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.484192 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.506885 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.526448 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.539240 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.543210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.567851 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6275ed33b70a915b2624ce3be264cf800be4656505fbb478cda0c95a4e486648\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"message\\\":\\\"cs-tls service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc000627d57 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:metrics,Protocol:TCP,Port:9393,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{name: ingress-operator,},ClusterIP:10.217.5.244,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.244],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0214 04:10:02.346564 6288 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to 
star\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 
04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.584134 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.603392 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.615772 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.630634 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.646641 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.664289 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.679164 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.692910 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.707903 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.722775 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.742310 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-14T04:10:20Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.749288 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.851766 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955142 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.955187 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:20Z","lastTransitionTime":"2026-02-14T04:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996813 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996906 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996840 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:20 crc kubenswrapper[4867]: I0214 04:10:20.996822 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997020 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997238 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:20 crc kubenswrapper[4867]: E0214 04:10:20.997663 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.058476 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.106781 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:15:43.3808039 +0000 UTC Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.161936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.162023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.162053 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.240928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.241142 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.241246 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:37.241228115 +0000 UTC m=+69.322165419 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265279 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.265351 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369171 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.369287 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.418539 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.428237 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.433252 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:21 crc kubenswrapper[4867]: E0214 04:10:21.433451 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.460771 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.472389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.474394 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.490688 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.507459 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c97130
13f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.521474 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.533357 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.548669 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.563000 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575183 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575286 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.575341 4867 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.580098 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.599605 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"im
ageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.613727 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.627261 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.646792 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.657900 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.671172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.678582 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.682476 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:21Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.780818 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883249 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883268 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.883311 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986596 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:21 crc kubenswrapper[4867]: I0214 04:10:21.986710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:21Z","lastTransitionTime":"2026-02-14T04:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.089992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.090296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.107003 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 20:16:18.305377205 +0000 UTC Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.193789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.298380 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.401636 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.437420 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.437713 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.505412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.505856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.506441 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.609254 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610433 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.610495 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.625870 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.630542 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.644085 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.647920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.647990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
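Every patch in this burst is rejected for the same root cause: the TLS handshake with the network-node-identity webhook fails because the current time (2026-02-14) is past the serving certificate's NotAfter date (2025-08-24T17:21:41Z), so the API server cannot deliver the admission review and the node-status PATCH dies with an internal error. A minimal diagnostic sketch to confirm the validity window from the node — assuming Python 3 with the third-party cryptography package (>= 42 for the *_utc properties) is available, and using the https://127.0.0.1:9743 endpoint taken from the log lines above:

    # check_webhook_cert.py -- fetch the webhook's serving certificate without
    # verifying it, then compare its validity window against the current time.
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # pip install cryptography (>= 42)

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the kubelet errors

    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # must be disabled before setting CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE  # accept the expired cert so we can read it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    now = datetime.now(timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    if now > cert.not_valid_after_utc:
        # same condition Go reports as "certificate has expired or is not yet valid"
        print(f"EXPIRED: current time {now:%Y-%m-%dT%H:%M:%SZ} is after notAfter")

Until that serving certificate is rotated, every pod or node write intercepted by the network-node-identity webhooks will fail the same way, which is why the identical error keeps recurring below.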
event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.648244 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.662392 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
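Note the cadence of the failures: the attempts at 04:10:22.625870, .644085, .662392, .688027 and .715032 (below) form a single burst, because the kubelet retries a node-status sync a fixed number of times back to back (nodeStatusUpdateRetry = 5 in the kubelet source) before logging "Unable to update node status" and waiting for the next sync interval. A schematic of that bounded-retry pattern — the names here are illustrative stand-ins, not the kubelet's actual Go code:

    # Schematic of the kubelet's bounded node-status retry loop (illustrative;
    # the real logic lives in Go, pkg/kubelet/kubelet_node_status.go).
    NODE_STATUS_UPDATE_RETRY = 5  # matches the five E0214 attempts in this log


    class PatchFailed(Exception):
        """Stand-in for the API server rejecting the status PATCH."""


    def update_node_status(patch_node_status) -> bool:
        """Try the status patch up to NODE_STATUS_UPDATE_RETRY times in a row.

        patch_node_status is a hypothetical callable representing the PATCH
        that the expired-certificate webhook is rejecting above; each failure
        is logged and immediately retried, producing the ~20 ms error bursts.
        """
        for _attempt in range(NODE_STATUS_UPDATE_RETRY):
            try:
                patch_node_status()
                return True
            except PatchFailed as err:
                print(f'E "Error updating node status, will retry" err="{err}"')
        print('E "Unable to update node status"')  # then wait for the next sync
        return False

This is also why the payload is byte-for-byte identical across the burst: nothing about the node changed between attempts; only the webhook call keeps failing.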
event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.666965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.667026 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.688027 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692964 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.692982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.693039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.693069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.715032 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:22Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.715562 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.717901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.717992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.718091 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820538 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.820565 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924087 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.924115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:22Z","lastTransitionTime":"2026-02-14T04:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958706 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.958923 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959055 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959072 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959085 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959078 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959006538 +0000 UTC m=+87.039943922 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959132 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959117331 +0000 UTC m=+87.040054655 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959209 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959283 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959313 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959480 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959541 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959343 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959306626 +0000 UTC m=+87.040243970 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959660 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959628185 +0000 UTC m=+87.040565539 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.959706 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:10:54.959686276 +0000 UTC m=+87.040623690 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997156 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997281 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:22 crc kubenswrapper[4867]: I0214 04:10:22.997176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997428 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:22 crc kubenswrapper[4867]: E0214 04:10:22.997596 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026332 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.026373 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.107142 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:33:49.629365913 +0000 UTC Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129140 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.129220 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232363 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232379 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.232391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335220 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.335256 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.438408 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.540145 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.642988 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746150 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.746243 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848111 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.848123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950807 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:23 crc kubenswrapper[4867]: I0214 04:10:23.950869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:23Z","lastTransitionTime":"2026-02-14T04:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.053678 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.108161 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:58:01.052186568 +0000 UTC Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.157278 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259633 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.259641 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362629 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.362737 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.464977 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566659 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566705 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.566753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.668770 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771314 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771353 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.771376 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873748 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.873853 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.976523 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:24Z","lastTransitionTime":"2026-02-14T04:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996946 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996995 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.997039 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:24 crc kubenswrapper[4867]: I0214 04:10:24.996965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997169 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997223 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:24 crc kubenswrapper[4867]: E0214 04:10:24.997794 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.078558 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.109021 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:49:53.87961093 +0000 UTC
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180730 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.180740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.283158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.384993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385045 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.385055 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.541276 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643481 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.643566 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.745970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
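Every NodeNotReady/KubeletNotReady record above reduces to one gate: the container runtime reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, and until then the kubelet keeps the node's Ready condition False and skips syncing pods that need pod networking (the "Error syncing pod, skipping" records). The Go sketch below only approximates that check: the directory comes from the log message itself, while the extension list (.conf, .conflist, .json) is the usual libcni convention and an assumption here, not the kubelet's exact code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The extension list follows the common libcni
// convention and is an assumption of this sketch.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		// A missing or unreadable directory is the situation the log
		// shows: NetworkReady=false, node stays NotReady.
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	if ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d"); err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("NetworkReady=true")
}

In this log the configuration is expected to be written by the OVN-Kubernetes and Multus pods once they come up, which is why the kubelet simultaneously reports "No sandbox for pod can be found" for the pods that depend on that network.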
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.756356 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.764647 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
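The two SyncLoop records above are the last routine entries before a long train of "Failed to update status for pod" records with a single cause: each status patch must pass the pod.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and TLS verification of that webhook's serving certificate fails because its NotAfter (2025-08-24T17:21:41Z) is long past at the node's current clock reading (2026-02-14), consistent with a CRC VM resumed across a large wall-clock jump (note the kubelet-serving rotation deadline of 2025-12-24 logged above, also already in the past). A minimal Go sketch of the validity check that is failing, with a hypothetical PEM path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point it at the serving certificate under test.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	switch {
	case now.After(cert.NotAfter):
		// Same shape as the log line: "current time ... is after ...".
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: certificate NotBefore is %s\n",
			cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}

Until that certificate is rotated (or the clock skew resolves), every patch below fails the same way, one record per pod.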
\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.787021 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781c
deaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863c
fa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.797602 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/e
nv\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.807227 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.818355 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.828853 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.839852 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.848994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.849018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.849034 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.850684 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.861854 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.871544 
4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.882677 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.902184 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.911525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.921285 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.932488 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.943219 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:25Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:25 crc kubenswrapper[4867]: I0214 04:10:25.951952 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:25Z","lastTransitionTime":"2026-02-14T04:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.053593 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.109279 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 04:58:45.075792111 +0000 UTC Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.155377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.257292 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360177 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.360204 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462639 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.462660 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.565970 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668232 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.668268 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770252 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.770315 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872728 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.872758 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974533 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974563 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974587 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.974598 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:26Z","lastTransitionTime":"2026-02-14T04:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996330 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996371 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996543 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:26 crc kubenswrapper[4867]: I0214 04:10:26.996436 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996660 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:26 crc kubenswrapper[4867]: E0214 04:10:26.996886 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076525 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076557 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.076580 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.110007 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:29:51.426943728 +0000 UTC Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.179979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.180001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.180018 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282245 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.282328 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384917 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384959 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.384989 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.487892 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590365 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.590389 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693822 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.693840 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796592 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796604 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796619 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.796636 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899188 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899244 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899284 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:27 crc kubenswrapper[4867]: I0214 04:10:27.899306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:27Z","lastTransitionTime":"2026-02-14T04:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002164 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.002184 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.104996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.110453 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:32:21.478362042 +0000 UTC
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208256 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.208367 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310616 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.310706 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.412784 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.515971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.516036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.619960 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723334 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723377 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723387 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.723412 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.825948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.826052 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.928435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:28Z","lastTransitionTime":"2026-02-14T04:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996483 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:28 crc kubenswrapper[4867]: I0214 04:10:28.996912 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:28 crc kubenswrapper[4867]: E0214 04:10:28.996962 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.012047 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.026586 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.032972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.033399 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.038532 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.051215 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.061205 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.079963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.096458 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/o
s-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.108365 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.110602 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:26:17.515223781 +0000 UTC Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.121476 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18
ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136081 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136742 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.136822 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.149331 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.161314 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.171347 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.184287 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.194804 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.206844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.223050 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:29Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.238669 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.341777 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.444125 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.546872 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.649867 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752401 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752444 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.752462 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855317 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:29 crc kubenswrapper[4867]: I0214 04:10:29.855358 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:29.958343 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:29Z","lastTransitionTime":"2026-02-14T04:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060189 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.060330 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.111166 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:45:41.466352365 +0000 UTC Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164935 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.164945 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.267178 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.369144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.471391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577109 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.577158 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679141 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679265 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.679276 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.781894 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.884313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986612 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.986686 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:30Z","lastTransitionTime":"2026-02-14T04:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996879 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996936 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:30 crc kubenswrapper[4867]: I0214 04:10:30.996959 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997061 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:30 crc kubenswrapper[4867]: E0214 04:10:30.997233 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089331 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089345 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.089355 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.111555 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:29:04.870028864 +0000 UTC
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.191980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192027 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.192054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294584 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.294611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397006 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397059 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.397068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.498946 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.603141 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704929 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.704987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.705005 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.705017 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.807191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.909997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:31 crc kubenswrapper[4867]: I0214 04:10:31.910015 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:31Z","lastTransitionTime":"2026-02-14T04:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.013193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.112390 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:08:54.408153659 +0000 UTC
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115874 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.115904 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218884 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.218893 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
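Every entry in this stretch reduces to one condition: the kubelet reports NetworkReady=false because nothing has written a CNI configuration file into /etc/kubernetes/cni/net.d/. Below is a minimal Go sketch of that kind of directory probe; hasCNIConfig and the .conf/.conflist/.json extension set are illustrative assumptions for this excerpt, not the kubelet's actual implementation.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds at least one CNI network config
// file. The extension set is an assumption mirroring common CNI loaders,
// not kubelet's exact matching rules.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	if !ok {
		// This is the state the entries above keep reporting.
		fmt.Println("network plugin not ready: no CNI configuration file found")
	}
}

Once the cluster's network plugin writes its config file into that directory, a probe like this returns true and the NodeNotReady heartbeats above would be expected to clear.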
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.322768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428896 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.428929 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.530983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531035 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.531067 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.633426 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.735681 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.736597 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840260 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840321 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.840333 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.942986 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943031 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:32 crc kubenswrapper[4867]: I0214 04:10:32.943068 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:32Z","lastTransitionTime":"2026-02-14T04:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.998477 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.998593 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.998758 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.998807 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.999069 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.999113 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:32.999145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:32.999182 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045083 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045118 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.045152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048270 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.048279 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.059356 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062261 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062295 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.062327 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.072214 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.074836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.074865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.074873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.074886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.074896 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.085744 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 
2025-08-24T17:21:41Z"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.088572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.088597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.088608 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.088623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.088641 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:33Z","lastTransitionTime":"2026-02-14T04:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.100576 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt (conditions, allocatable/capacity, images, nodeInfo, runtimeHandlers) omitted ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 2025-08-24T17:21:41Z"
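The webhook call fails TLS verification because the serving certificate's validity window ended 2025-08-24 while the node clock reads 2026-02-14. A minimal Go sketch of the same validity-window check (the address is the webhook endpoint from the log lines above; this is illustrative only and skips the full chain verification the kubelet performs):

```go
// Fetch the serving certificate from the webhook endpoint and report whether
// the local clock falls inside its validity window, reproducing the
// "certificate has expired or is not yet valid" failure seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// InsecureSkipVerify lets us inspect an already-expired certificate
	// instead of failing the handshake outright.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0] // server's leaf certificate
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", leaf.NotBefore, leaf.NotAfter, now)
	switch {
	case now.Before(leaf.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(leaf.NotAfter):
		fmt.Println("certificate has expired") // the failure reported above
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```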
[The preceding four "Recording event message" entries and the "Node became not ready" condition repeat verbatim at 04:10:33.104.]
Feb 14 04:10:33 crc kubenswrapper[4867]: I0214 04:10:33.112638 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:49:05.778615726 +0000 UTC
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.114579 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [... status patch payload identical to the previous attempt (conditions, allocatable/capacity, images, nodeInfo, runtimeHandlers) omitted ...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:33Z is after 2025-08-24T17:21:41Z"
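The certificate_manager entries print a rotation deadline (2025-11-20) that is already in the past relative to the node clock, which is why the kubelet recomputes it on every sync. Upstream client-go's certificate manager picks the deadline at a jittered 70-90% of the certificate's validity window; a sketch of that computation (the jitter rule is taken from upstream client-go and is an assumption about this exact kubelet build, and the NotBefore value below is hypothetical):

```go
// Sketch of the jittered rotation deadline behind the certificate_manager
// log entries: a point 70-90% of the way through the certificate's validity
// window (the rule used by k8s.io/client-go's certificate manager; assumed,
// not verified against this build).
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	// 0.7 + 0.3*rand.Float64() yields a fraction in [0.7, 1.0).
	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the log; the issue time is a hypothetical
	// one-year-earlier NotBefore for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```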
Feb 14 04:10:33 crc kubenswrapper[4867]: E0214 04:10:33.114682 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
[The event/condition cycle repeats verbatim at 04:10:33.147, .249, .352, .455, .557, .660, .762, .865, and .968.]
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071228 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071254 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.071267 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:34Z","lastTransitionTime":"2026-02-14T04:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
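Each "Node became not ready" entry is the kubelet stamping the node's Ready condition; the condition printed just above has exactly this shape. A self-contained sketch that builds and marshals the same structure (plain structs stand in for k8s.io/api/core/v1.NodeCondition so the example compiles without Kubernetes dependencies):

```go
// Sketch of the Ready condition object the setters.go entries print.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// NodeCondition mirrors the JSON fields visible in the log; it is a
// dependency-free stand-in, not the real v1.NodeCondition type.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False", // NotReady until a CNI config appears
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	b, _ := json.Marshal(ready)
	fmt.Println(string(b))
}
```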
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.113513 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:50:08.313660573 +0000 UTC
[The event/condition cycle repeats verbatim at 04:10:34.173, .276, .379, .481, .583, .685, .787, and .889.]
[The event/condition cycle repeats verbatim at 04:10:34.991.]
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996766 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.996935 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997134 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997379 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:34 crc kubenswrapper[4867]: I0214 04:10:34.996807 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:34 crc kubenswrapper[4867]: E0214 04:10:34.997776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[The event/condition cycle repeats verbatim at 04:10:35.093.]
Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.113951 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:42:19.632037517 +0000 UTC
[The event/condition cycle repeats verbatim at 04:10:35.196.]
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.298565 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.400826 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.502995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.503076 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.605489 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.707664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810053 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.810177 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911877 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911943 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:35 crc kubenswrapper[4867]: I0214 04:10:35.911955 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:35Z","lastTransitionTime":"2026-02-14T04:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.013731 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.114481 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:40:05.242522148 +0000 UTC Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.116810 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219722 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.219797 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323037 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.323936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.324085 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428209 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.428253 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532370 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.532575 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634352 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634366 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634383 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.634394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736287 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736330 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.736368 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.838807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940904 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940950 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940960 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.940986 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:36Z","lastTransitionTime":"2026-02-14T04:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996744 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996803 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.996893 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996928 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.996768 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997058 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997092 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.997176 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:36 crc kubenswrapper[4867]: I0214 04:10:36.997882 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:36 crc kubenswrapper[4867]: E0214 04:10:36.998059 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043285 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043320 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.043342 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.115175 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:13:16.935884822 +0000 UTC Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.145983 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248456 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.248632 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.314295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:37 crc kubenswrapper[4867]: E0214 04:10:37.314468 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:37 crc kubenswrapper[4867]: E0214 04:10:37.314642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:09.314614387 +0000 UTC m=+101.395551781 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.350993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.351010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.351022 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452920 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.452984 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555766 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.555788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657814 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.657851 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.759954 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.759991 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.760041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862187 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.862216 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964402 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964410 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:37 crc kubenswrapper[4867]: I0214 04:10:37.964432 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:37Z","lastTransitionTime":"2026-02-14T04:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067595 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.067606 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.115829 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:02:42.907822636 +0000 UTC Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169867 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.169918 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.170090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.170104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272348 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.272685 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.375480 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.477972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.478043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579963 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.579993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681778 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.681807 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.784335 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886441 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.886474 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988714 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988731 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.988741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:38Z","lastTransitionTime":"2026-02-14T04:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.996908 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997095 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997205 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997251 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:38 crc kubenswrapper[4867]: I0214 04:10:38.997539 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:38 crc kubenswrapper[4867]: E0214 04:10:38.997609 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.011759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.027037 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name
\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.037952 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.046640 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.059230 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\
\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.068178 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.082345 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090745 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090781 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090794 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.090802 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.093581 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.112001 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.116458 
4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 09:09:11.816422242 +0000 UTC Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.120809 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.132567 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.148889 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.158659 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.168802 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-
manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.179843 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193267 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193686 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.193768 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.206306 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.296653 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398913 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.398946 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485390 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485438 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7" exitCode=1 Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.485790 4867 scope.go:117] "RemoveContainer" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.497065 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-
pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500852 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.500934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.501033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.501134 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.508125 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.518695 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.533099 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.544802 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.559104 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.570215 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.580933 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.592525 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.603912 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604082 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.604108 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.614614 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.627421 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.636149 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.646047 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.662272 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.671193 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.682721 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:39Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706138 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.706162 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.808144 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911344 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:39 crc kubenswrapper[4867]: I0214 04:10:39.911377 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:39Z","lastTransitionTime":"2026-02-14T04:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013227 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013238 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.013270 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115725 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115770 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115779 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.115804 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.116809 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:59:06.850031552 +0000 UTC Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217772 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.217842 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320184 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.320250 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422476 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422540 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.422563 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.490540 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.490594 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.501472 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.515842 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525354 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525399 4867 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525413 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.525421 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.526708 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.539232 4867 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.558613 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dad
a8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.571796 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.583195 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.604844 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.623645 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629544 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629581 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.629613 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.638046 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.654778 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.672712 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.688038 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.705079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.719093 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732114 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.732149 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.733495 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.744080 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:40Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834593 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834701 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834750 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.834771 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.937664 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:40Z","lastTransitionTime":"2026-02-14T04:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996565 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996609 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996630 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996908 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.996951 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:40 crc kubenswrapper[4867]: I0214 04:10:40.996968 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:40 crc kubenswrapper[4867]: E0214 04:10:40.997171 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040335 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040391 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.040427 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.116893 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:21:30.684385757 +0000 UTC Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143160 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.143219 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.245969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.246061 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348872 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.348942 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.451479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.554970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.555081 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657455 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657485 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.657498 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759518 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759528 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.759555 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861749 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861759 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.861808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:41 crc kubenswrapper[4867]: I0214 04:10:41.964869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:41Z","lastTransitionTime":"2026-02-14T04:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.066978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.067062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.067074 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.117933 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 01:42:20.592576057 +0000 UTC Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169322 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169343 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169374 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.169394 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271599 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.271627 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374572 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.374600 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.478121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.478210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479609 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.479618 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.583787 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687247 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687258 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.687284 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789588 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789605 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.789617 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.892369 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995789 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.995880 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:42Z","lastTransitionTime":"2026-02-14T04:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996210 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996304 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996668 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:42 crc kubenswrapper[4867]: I0214 04:10:42.996720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996724 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996810 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.996994 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:42 crc kubenswrapper[4867]: E0214 04:10:42.997083 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.099886 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.118748 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:25:39.04593179 +0000 UTC Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203556 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.203611 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205072 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.205166 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.226792 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231939 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.231989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.232004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.232012 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.249246 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.254365 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.272301 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276601 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276694 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276746 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.276774 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.297339 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307813 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.307902 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.332283 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:43Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:43 crc kubenswrapper[4867]: E0214 04:10:43.332882 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.345976 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346094 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.346115 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448989 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.448999 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551531 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.551570 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654482 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654539 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.654560 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757875 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.757885 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860469 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860494 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.860528 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.962994 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.963013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:43 crc kubenswrapper[4867]: I0214 04:10:43.963024 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067102 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.067152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:44Z","lastTransitionTime":"2026-02-14T04:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
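The node-status patch above is rejected because the serving certificate of the node.network-node-identity.openshift.io webhook expired on 2025-08-24 while the node clock reads 2026-02-14, so every status-update attempt fails until that certificate is rotated. A minimal sketch of the same x509 validity check, assuming the third-party cryptography package and a shell on the node (the webhook listens on loopback):

```python
# Hypothetical check, not kubelet code: fetch the webhook's serving
# certificate and compare its validity window with the current time,
# mirroring the x509 error logged above. Host and port come from the
# logged URL https://127.0.0.1:9743/node.
import ssl
from datetime import datetime, timezone

from cryptography import x509  # assumed installed: pip install cryptography

HOST, PORT = "127.0.0.1", 9743

# get_server_certificate() does not verify the peer, so it still returns
# the PEM even though a verifying handshake fails exactly as kubelet's does.
pem = ssl.get_server_certificate((HOST, PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)  # newer versions: not_valid_after_utc
now = datetime.now(timezone.utc)
print("notAfter:", not_after.isoformat())
print("expired:", now > not_after)  # True in the state captured by this log
```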
Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.119422 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:38:26.932697211 +0000 UTC
[identical status cycles at 04:10:44.169779 and 04:10:44.272769 elided]
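The certificate_manager record shows the kubelet-serving certificate itself is still valid until 2026-02-24, yet each recomputed rotation deadline lands in the past (2026-01-18 here; 2025-12-07, 2025-12-23 and 2025-12-15 on the later attempts), so the manager considers rotation overdue and retries about once per second. client-go draws each deadline at a uniformly random point in roughly the 70-90% span of the certificate's lifetime; a sketch of that computation, assuming a one-year lifetime (consistent with all four logged deadlines):

```python
# Illustrative sketch (not kubelet source): client-go's certificate manager
# draws a jittered rotation deadline at a random point in roughly the
# 70-90% span of the certificate's lifetime and rotates once it passes.
import random
from datetime import datetime, timedelta, timezone

def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    lifetime = (not_after - not_before).total_seconds()
    jittered = lifetime * (0.7 + 0.2 * random.random())  # 70-90% of lifetime
    return not_before + timedelta(seconds=jittered)

not_after = datetime(2026, 2, 24, 5, 53, 3, tzinfo=timezone.utc)  # from the log
not_before = not_after - timedelta(days=365)                      # assumed lifetime
now = datetime(2026, 2, 14, 4, 10, 44, tzinfo=timezone.utc)       # node clock

# Every deadline drawn this way falls between roughly 2025-11-07 and
# 2026-01-19, bracketing the four deadlines in this log -- and all are
# before `now`, which is why a fresh deadline is logged on each retry.
print(rotation_deadline(not_before, not_after).isoformat(), "<", now.isoformat())
```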
[identical status cycles from 04:10:44.375533 through 04:10:44.994137 elided]
Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996335 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996445 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996632 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:44 crc kubenswrapper[4867]: I0214 04:10:44.996855 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.996997 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:44 crc kubenswrapper[4867]: E0214 04:10:44.997145 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[identical status cycle at 04:10:45.096971 elided]
Feb 14 04:10:45 crc kubenswrapper[4867]: I0214 04:10:45.120440 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:10:38.63329086 +0000 UTC
[identical status cycle at 04:10:45.199983 elided]
[identical status cycles from 04:10:45.303851 through 04:10:46.026768 elided]
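Because setters.go logs the Ready condition as literal JSON, a log watcher can parse the condition instead of pattern-matching whole lines; a minimal sketch, with the payload copied verbatim from the records above:

```python
# Minimal sketch: parse the condition={...} payload that setters.go logs
# and flag the NotReady transition.
import json

payload = '''{"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:43Z","lastTransitionTime":"2026-02-14T04:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}'''

cond = json.loads(payload)
if cond["type"] == "Ready" and cond["status"] != "True":
    print(f"node NotReady since {cond['lastTransitionTime']}: {cond['reason']}")
    print(f"  {cond['message']}")
```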
Feb 14 04:10:46 crc kubenswrapper[4867]: I0214 04:10:46.121438 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:00:03.702248693 +0000 UTC
[identical status cycles from 04:10:46.131797 through 04:10:46.853747 elided]
[identical status cycle at 04:10:46.956334 elided; the "No sandbox for pod can be found" / "Error syncing pod, skipping" records for the same four pods recur unchanged at 04:10:46.996674 through 04:10:46.997279; identical status cycle at 04:10:47.058285 elided]
Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.121705 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 00:31:37.838659421 +0000 UTC
[identical status cycle at 04:10:47.160762 elided]
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262831 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262865 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.262897 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365378 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.365402 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.467754 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570235 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.570246 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672437 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672449 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.672477 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776564 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.776613 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879371 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879399 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.879416 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981718 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:47 crc kubenswrapper[4867]: I0214 04:10:47.981741 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:47Z","lastTransitionTime":"2026-02-14T04:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083838 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.083869 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.149302 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:44:08.910983852 +0000 UTC Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186603 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186614 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.186647 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289161 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.289172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391656 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391706 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.391748 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.493984 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.494002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.494014 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.596484 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.698302 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.800944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.801061 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903165 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.903174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:48Z","lastTransitionTime":"2026-02-14T04:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997123 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997359 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997572 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997703 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:48 crc kubenswrapper[4867]: I0214 04:10:48.997777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:48 crc kubenswrapper[4867]: E0214 04:10:48.997902 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005585 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.005981 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.013469 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"
quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.025116 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.036607 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.046591 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.062071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.075269 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.090140 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.106863 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109236 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109269 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109281 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.109310 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.119630 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.134991 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.149274 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.149394 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:41:41.583765224 +0000 UTC Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.164598 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.182896 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.196168 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211464 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.211515 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.218768 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.230624 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.247360 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:49Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:49 crc 
kubenswrapper[4867]: I0214 04:10:49.313361 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313425 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313439 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.313448 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415626 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415654 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415664 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.415689 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518645 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.518681 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.621987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.622111 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.731305 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.834956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.835059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938185 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938198 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938234 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:49 crc kubenswrapper[4867]: I0214 04:10:49.938249 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:49Z","lastTransitionTime":"2026-02-14T04:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040792 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040836 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040851 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.040859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143737 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143751 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.143761 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.150321 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 07:08:15.741219139 +0000 UTC Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246276 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.246296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.348519 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450573 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.450645 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.553323 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655674 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655726 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655739 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655757 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.655771 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758921 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.758933 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861912 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861956 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.861972 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964255 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.964302 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:50Z","lastTransitionTime":"2026-02-14T04:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996854 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.996866 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997014 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.997196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997299 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997369 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:50 crc kubenswrapper[4867]: E0214 04:10:50.997497 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:50 crc kubenswrapper[4867]: I0214 04:10:50.998148 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.010851 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066679 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066724 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066736 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.066747 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.151201 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:36:17.77420617 +0000 UTC Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170121 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170144 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.170154 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272625 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272663 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.272710 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.375936 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376213 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.376382 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.479574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.479990 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480419 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480554 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.480656 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583233 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583777 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583803 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.583821 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686424 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686468 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686492 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.686501 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789390 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789471 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.789555 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893041 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.893308 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996304 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996432 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:51 crc kubenswrapper[4867]: I0214 04:10:51.996487 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:51Z","lastTransitionTime":"2026-02-14T04:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.098973 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.099045 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.151442 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 05:25:43.888723023 +0000 UTC Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201452 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.201483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303707 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303740 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303768 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.303793 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406231 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406274 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406298 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.406309 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508545 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.508569 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.531078 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.531736 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/2.log" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534160 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" exitCode=1 Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.534248 4867 scope.go:117] "RemoveContainer" containerID="901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.535211 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.535441 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.548337 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.561405 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.570179 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.581577 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.597849 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3
cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node 
network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610232 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610763 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610808 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.610832 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.622560 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.634076 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.645714 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.656657 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.673786 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d
367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.687090 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.700207 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.711729 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712821 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712830 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.712856 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.724630 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.735360 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.746165 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.757172 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:52Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815711 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815765 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.815798 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918143 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918216 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.918225 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:52Z","lastTransitionTime":"2026-02-14T04:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996856 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996911 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.996942 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997065 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:52 crc kubenswrapper[4867]: I0214 04:10:52.997101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997238 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997385 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:52 crc kubenswrapper[4867]: E0214 04:10:52.997622 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020734 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.020773 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.123919 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.123996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124013 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.124054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.152224 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:32:13.672254515 +0000 UTC Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227841 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227853 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.227884 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330074 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330088 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.330096 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432787 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432866 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.432877 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534849 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534869 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.534879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.538328 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637793 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637811 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.637851 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656349 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656483 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656672 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.656708 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.677386 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681940 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.681996 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.697704 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701892 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.701978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.702004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.702025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.720680 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724329 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724367 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
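[annotation] Every status-patch failure in this stretch has the same root cause, stated at the tail of the error: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-14. A minimal sketch for confirming this from the node itself (Go; the address and 10s timeout are taken from the log, everything else is illustrative):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	dialer := &net.Dialer{Timeout: 10 * time.Second} // same timeout as the webhook call
	// Skip verification on purpose: the point is to read the expired certificate,
	// which a verifying handshake (like the kubelet's) refuses to complete.
	conn, err := tls.DialWithDialer(dialer, "tcp", "127.0.0.1:9743",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC())
	fmt.Println("notAfter: ", cert.NotAfter.UTC())
	if time.Now().After(cert.NotAfter) {
		fmt.Println("expired -> matches: x509: certificate has expired or is not yet valid")
	}
}
```

Until that certificate is rotated, every retry of the PATCH fails the same way, which is exactly the pattern the following entries show.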
event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724380 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.724411 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.743019 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the first status-patch attempt above, elided …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746780 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746825 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746860 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.746874 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.761216 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the first status-patch attempt above, elided …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:53Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:53 crc kubenswrapper[4867]: E0214 04:10:53.761371 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
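[annotation] The "Unable to update node status" entry above is the kubelet giving up after a bounded number of attempts per sync; in upstream kubelet that bound is the nodeStatusUpdateRetry constant (historically 5). A condensed, hypothetical rendering of that loop — updateNodeStatus here is a stand-in, not the real function:

```go
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // bound used by upstream kubelet

// updateNodeStatus stands in for the real PATCH of the Node object;
// here it always fails the way the log shows, at the admission webhook.
func updateNodeStatus() error {
	return errors.New(`Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io"`)
}

func main() {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := updateNodeStatus(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return
	}
	fmt.Println("Unable to update node status: update node status exceeds retry count")
}
```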
event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762980 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.762995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.763014 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.763030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.865123 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968357 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968467 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:53 crc kubenswrapper[4867]: I0214 04:10:53.968485 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:53Z","lastTransitionTime":"2026-02-14T04:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071733 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.071883 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.152864 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:01:11.664243721 +0000 UTC Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175079 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175132 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.175193 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278299 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278333 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.278354 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.381113 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484306 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484411 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.484449 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586894 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586978 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.586990 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.689958 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.689992 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.690030 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793359 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793369 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.793397 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896323 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896346 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.896362 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996156 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996235 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996305 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996328 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.996387 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996545 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996670 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
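[annotation] Independently of the webhook, the Ready condition keeps failing on the CNI readiness check: no config file in /etc/kubernetes/cni/net.d/, which is also why the sandbox creation for the pods above is skipped. A quick way to see what that directory actually holds (path from the log; treating .conf/.conflist/.json as the config extensions is an assumption about the runtime's loader):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, "->", err) // matches "no CNI configuration file"
		return
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found++
			fmt.Println("config:", filepath.Join(dir, e.Name()))
		}
	}
	if found == 0 {
		fmt.Println("directory exists but holds no CNI config; network plugin not ready")
	}
}
```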
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:10:54 crc kubenswrapper[4867]: E0214 04:10:54.996808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.998995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999019 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999046 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:54 crc kubenswrapper[4867]: I0214 04:10:54.999058 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:54Z","lastTransitionTime":"2026-02-14T04:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.017657 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017724 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017768 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 
04:10:55.017770 4867 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.017791 4867 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018116 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.017673752 +0000 UTC m=+151.098611076 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.018319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018382 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.01836285 +0000 UTC m=+151.099300164 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018602 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.018581106 +0000 UTC m=+151.099518460 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.018637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018752 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018771 4867 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018787 4867 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018833 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.018818832 +0000 UTC m=+151.099756176 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.018415 4867 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 14 04:10:55 crc kubenswrapper[4867]: E0214 04:10:55.019027 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.019014857 +0000 UTC m=+151.099952201 (durationBeforeRetry 1m4s). 
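[annotation] The volume manager applies per-operation exponential backoff, which is why these mount/unmount retries are all pushed out to 04:11:59 with durationBeforeRetry 1m4s: 64s is consistent with a delay that doubles from a small initial value up to a cap. The 500ms start and ~2m cap below mirror kubelet defaults as an assumption; 1m4s appears as the eighth step of this progression:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond                   // assumed initial backoff
	maxDelay := 2*time.Minute + 2500*time.Millisecond // assumed cap (~2m2.5s)
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %2d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```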
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101576 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101642 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.101654 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.153974 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 03:47:39.383829598 +0000 UTC
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.203983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204052 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204069 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.204108 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307386 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307502 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.307588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410042 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410090 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410104 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.410146 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.512968 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513033 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513055 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513085 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.513103 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615607 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615669 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.615702 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718543 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718586 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718597 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.718631 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821632 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821682 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821696 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821719 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.821732 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924172 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924182 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924196 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:55 crc kubenswrapper[4867]: I0214 04:10:55.924210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:55Z","lastTransitionTime":"2026-02-14T04:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026358 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026408 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026426 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.026435 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128480 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.128586 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.154802 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:52:57.764848678 +0000 UTC
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230721 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.230740 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333795 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333914 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.333934 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436548 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436622 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.436718 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539621 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539667 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.539680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641882 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641898 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.641911 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744208 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744302 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744319 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744341 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.744358 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846536 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846615 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846631 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.846639 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.949982 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.950069 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:56Z","lastTransitionTime":"2026-02-14T04:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996390 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996420 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:56 crc kubenswrapper[4867]: I0214 04:10:56.996399 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996602 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996722 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996886 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:56 crc kubenswrapper[4867]: E0214 04:10:56.996974 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052157 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052217 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052225 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.052273 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154438 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.154459 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.155500 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 05:38:33.848117862 +0000 UTC
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257018 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257076 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.257104 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359417 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359451 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.359487 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462034 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.462116 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564773 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564823 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564839 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564862 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.564879 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667204 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667241 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667257 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.667291 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771339 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771435 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.771560 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875264 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.875412 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979025 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979091 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979136 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:57 crc kubenswrapper[4867]: I0214 04:10:57.979156 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:57Z","lastTransitionTime":"2026-02-14T04:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082124 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.082174 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.156337 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:24:28.006731968 +0000 UTC
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193583 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193634 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.193694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297698 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297783 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297843 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.297868 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400944 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.400978 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.503864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504427 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.504561 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607392 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607465 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.607479 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709713 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709845 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.709865 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812405 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812837 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.812971 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.813130 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.813275 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916116 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916158 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.916191 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:58Z","lastTransitionTime":"2026-02-14T04:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.996975 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.997019 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.996983 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:10:58 crc kubenswrapper[4867]: I0214 04:10:58.997100 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997207 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997398 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997474 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39"
Feb 14 04:10:58 crc kubenswrapper[4867]: E0214 04:10:58.997715 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.013764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019553 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019575 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.019588 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.029906 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z"
Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.048214 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.061877 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.075460 4867 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.086580 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.099656 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.121988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122129 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.122147 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.127079 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://901f1924f11611b25b82799b2f09cf1c83f31dada8ce10e3fabf0d2968107b93\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:20Z\\\",\\\"message\\\":\\\".go:160\\\\nI0214 04:10:20.016141 6511 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016279 6511 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0214 04:10:20.016402 6511 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0214 04:10:20.016537 6511 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0214 04:10:20.016835 6511 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0214 04:10:20.016886 6511 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0214 04:10:20.016916 6511 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0214 04:10:20.016948 6511 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0214 04:10:20.016968 6511 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0214 04:10:20.016972 6511 factory.go:656] Stopping watch factory\\\\nI0214 04:10:20.016992 6511 ovnkube.go:599] Stopped ovnkube\\\\nI0214 
04:10:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.139066 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.156239 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-o
perator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.156521 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 07:16:15.549282464 +0000 UTC Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.170220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.183963 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 
2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.196582 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.217444 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d
367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224611 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224670 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224695 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224720 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.224736 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.231482 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.246627 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.259006 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 
04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.269182 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:10:59Z is after 2025-08-24T17:21:41Z" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327083 4867 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327097 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.327128 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429716 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.429728 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532206 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532296 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532325 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.532343 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.634825 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737271 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737311 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.737330 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839796 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839805 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839818 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.839827 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943613 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943644 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:10:59 crc kubenswrapper[4867]: I0214 04:10:59.943663 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:10:59Z","lastTransitionTime":"2026-02-14T04:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045636 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045671 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.045682 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.147946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148549 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148660 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.148753 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.157279 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 13:09:32.773233552 +0000 UTC Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251026 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251699 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251729 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251754 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.251789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354880 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354899 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.354943 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457767 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457856 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.457870 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561086 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.561182 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663623 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663680 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.663711 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.766942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.766999 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767015 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.767054 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870591 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870676 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870790 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.870899 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973662 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973732 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.973792 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:00Z","lastTransitionTime":"2026-02-14T04:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996238 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996365 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:00 crc kubenswrapper[4867]: I0214 04:11:00.996591 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.996749 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.996894 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.997029 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:00 crc kubenswrapper[4867]: E0214 04:11:00.997154 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076524 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076578 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076593 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.076605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.158109 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:50:45.83111862 +0000 UTC Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.180819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181195 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181501 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.181680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.284938 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.285753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.285897 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.286029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.286152 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389381 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389570 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.389828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.390035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493278 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493347 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.493419 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596135 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.596998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.597159 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700817 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700910 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700945 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.700965 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804957 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.804977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.805007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.805029 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909077 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909407 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909559 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:01 crc kubenswrapper[4867]: I0214 04:11:01.909830 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:01Z","lastTransitionTime":"2026-02-14T04:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013106 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.013170 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.014905 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116952 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.116975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.117003 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.117025 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.158685 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 23:57:41.488149865 +0000 UTC Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220689 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220708 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220741 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.220763 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.324988 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325084 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.325132 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428200 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428280 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428340 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.428407 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532166 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.532229 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636146 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.636196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738650 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.738695 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842560 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842833 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.842959 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.946834 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947215 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947373 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947577 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:02 crc kubenswrapper[4867]: I0214 04:11:02.947759 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:02Z","lastTransitionTime":"2026-02-14T04:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000898 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000952 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.001048 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.000917 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001153 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001288 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001427 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.001628 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051210 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051496 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051858 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.051927 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154975 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.154997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.155024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.155041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.159751 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 07:26:24.286025591 +0000 UTC Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258620 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258800 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.258961 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.259091 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362251 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362275 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.362296 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464760 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464799 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464809 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464847 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.464857 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567479 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567497 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567532 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.567544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669854 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669915 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669934 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.669947 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772443 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772487 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772541 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.772554 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875351 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875400 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875430 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.875442 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919263 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919337 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919398 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.919411 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.937993 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.942703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.942755 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943691 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943864 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.943907 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.968059 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973137 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973154 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.973194 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:03 crc kubenswrapper[4867]: E0214 04:11:03.992402 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:03Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996173 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:03 crc kubenswrapper[4867]: I0214 04:11:03.996185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:03Z","lastTransitionTime":"2026-02-14T04:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.011010 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014786 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
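
Every status-patch retry above fails the same way: the API server's call to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/node fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2026-02-14. A minimal Go sketch to confirm what that endpoint is actually serving; the address is taken from the log line above, and verification is skipped deliberately so the handshake survives the expired certificate:

    // checkcert probes a TLS endpoint and reports whether its serving
    // certificate is valid at the current time. Diagnostic sketch only:
    // verification is disabled so an expired certificate can still be
    // inspected rather than aborting the handshake.
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        addr := "127.0.0.1:9743" // webhook endpoint from the kubelet log
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatalf("handshake with %s failed: %v", addr, err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        now := time.Now()
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore)
        fmt.Printf("notAfter:  %s\n", cert.NotAfter)
        switch {
        case now.Before(cert.NotBefore):
            fmt.Println("certificate is not yet valid")
        case now.After(cert.NotAfter):
            fmt.Println("certificate has expired") // the case seen in this log
        default:
            fmt.Println("certificate is currently valid")
        }
    }

Run on the node itself, the expired branch should fire; if the dial is refused outright, the webhook is not serving at all, which would be a different failure than the expiry recorded here.
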
event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014848 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.014859 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.031313 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-14T04:11:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"148e1364-0af4-4e1f-ae72-52166d888ddc\\\",\\\"systemUUID\\\":\\\"1382a0d3-8d29-4f25-bc2c-dc46ad541396\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:04Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.031472 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033871 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033909 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033924 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033942 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.033953 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136903 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136966 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.136993 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.160429 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:28:47.905212162 +0000 UTC Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239758 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239806 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239820 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.239831 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342253 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342303 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342313 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.342340 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444640 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444668 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.444680 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.547977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.548080 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650647 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650723 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650743 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650769 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.650788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753393 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753412 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753440 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.753460 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856229 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856242 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.856251 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958827 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958879 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958890 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.958923 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:04Z","lastTransitionTime":"2026-02-14T04:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997264 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997315 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:04 crc kubenswrapper[4867]: I0214 04:11:04.997431 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997426 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997631 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997653 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:04 crc kubenswrapper[4867]: E0214 04:11:04.997695 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.060970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061051 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.061090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.161441 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 00:03:05.286064072 +0000 UTC Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163752 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.163853 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266850 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266925 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266951 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.266981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.267043 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370473 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370493 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.370618 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.473309 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.473972 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474071 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474179 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.474268 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577637 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577685 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.577693 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679844 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679889 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.679929 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782318 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782350 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.782372 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885747 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885868 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.885924 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988690 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988774 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988798 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.988815 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:05Z","lastTransitionTime":"2026-02-14T04:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:05 crc kubenswrapper[4867]: I0214 04:11:05.999558 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.000249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.014759 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.028978 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.065975 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c3c4372c54609d
367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.080303 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092194 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.092745 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.093033 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.101120 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.113694 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.133718 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.150130 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.162388 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:37:09.047778055 +0000 UTC Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.164620 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.184388 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3
cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195108 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195840 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195859 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.195873 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.209220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.221914 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.235890 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.248220 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.263495 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.277290 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.289371 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298835 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298878 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298891 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.298907 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 
04:11:06.298918 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.300196 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"96d081a5-08ac-4716-b6ab-64959cf2933f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:06Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401802 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401828 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.401954 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504429 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504472 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504488 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.504499 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606445 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606490 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606776 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.606789 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709221 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709282 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709291 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.709313 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811327 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811360 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811368 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811382 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.811391 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913771 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913782 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913797 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.913808 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:06Z","lastTransitionTime":"2026-02-14T04:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996818 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996907 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:06 crc kubenswrapper[4867]: I0214 04:11:06.996992 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.996998 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997056 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997082 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:06 crc kubenswrapper[4867]: E0214 04:11:06.997105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016272 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016316 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016342 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.016352 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119197 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119248 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119266 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119288 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.119306 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.163464 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:05:50.328609962 +0000 UTC Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221675 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221744 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221756 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.221788 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324056 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324115 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324156 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.324173 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.426970 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427024 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427040 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427214 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.427370 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530461 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.530483 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633855 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.633968 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737020 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737064 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737073 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.737099 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842326 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842395 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842414 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842442 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.842461 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946159 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:07 crc kubenswrapper[4867]: I0214 04:11:07.946172 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:07Z","lastTransitionTime":"2026-02-14T04:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051396 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051458 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051475 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051499 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.051550 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.154977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155049 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155095 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.155116 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.164609 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 05:02:11.701766968 +0000 UTC Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258661 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258684 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258717 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.258742 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362131 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362149 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362176 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.362196 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.465832 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.465965 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466039 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.466066 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.568933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.568985 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569000 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569021 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.569035 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671537 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671558 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671579 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.671592 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773653 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773673 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773703 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.773721 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875364 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875406 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875418 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875434 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.875445 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.977997 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978062 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978080 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.978130 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:08Z","lastTransitionTime":"2026-02-14T04:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996762 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996765 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996773 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:08 crc kubenswrapper[4867]: I0214 04:11:08.996894 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997095 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997249 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:08 crc kubenswrapper[4867]: E0214 04:11:08.997747 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.015782 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7206174b-645b-4924-8345-d1d4b1a5ec39\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-272vg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4b6k5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.052922 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98379eae-150a-49e4-bc5a-774db567b411\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1680b0766cf32cd9af06a1636274ebdc0e1a0eb1ef8ebf2dd5af50a426593936\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c647364c951a6adef887ffa61edec540e1ba09f957cffaf60aa4e2fb6ecaa22d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f07e13016eff40608d9a7f5dbdbd6e4faa7b21b965957c062bfd1c40b04d582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85486406cb9ccb97ccb382e44c
3c4372c54609d367aeec7a04ddfa06424c9cd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5777a20697086ac1eaf7dd01c471658a6ea96751fc9184d7bc2597777d86949a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4d315b1c424660a2a02ab7882b4d25e0baa2407cbcc9efab29adf052733231\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6718fb3f6cc2532e0ed35f4a37eb39738cd75a5f20f85e778dec867a620eba6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d444
9bd8fdd08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://185d95c4c216a23ddee54c001dee313a17659c22037a5f60772d4449bd8fdd08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.070316 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fl729" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fb77d03e-6ead-48b5-a96a-db4cbd540192\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:38Z\\\",\\\"message\\\":\\\"2026-02-14T04:09:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b\\\\n2026-02-14T04:09:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_a3f597f2-b921-47ce-8faa-6d588a62271b to /host/opt/cni/bin/\\\\n2026-02-14T04:09:53Z [verbose] multus-daemon started\\\\n2026-02-14T04:09:53Z [verbose] Readiness Indicator file check\\\\n2026-02-14T04:10:38Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gznnx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fl729\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.081967 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082043 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082065 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082108 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.082134 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.089145 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9st5b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d645541b-4940-4e53-a506-1b42bd296dfb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80e2ddc09dadcbbbecee7addee881a393497c7456c1ab3fd4ec4b870d86e87ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://feb7781cdeaa9630cf43de5bccfe8b6b1c75511e3d5367c9713013f53c1c5bf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a413af7df0d352ae0577b49063be30eee5907c64a9ec4e6ed665519d372018d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://26cb023b1c5ece8cf7f2d539342fc934faac5f25288fd9c64af98b58c9090dd2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed8145
1ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9eb2677284155e93284d850a007114f8bc957ea4e8b7b698425863cfa19956ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0073377386f68c5c2037c33e4763d2f20f3dd782955d78aa5695ed2b013ae57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84c2a64e464d7a238c1f805bf5912e0c6f43cb1c839c36712bc44ec0c8acd8d7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nd8lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTim
e\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9st5b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.109150 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05957e01-c589-4408-8f80-cd33f8856262\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9bb1c0677bd48bd254b78efc670de4cf3c1a2ae1a5dde8bcdc4d84ff4524b847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3962042c51f3b88c029c3ee23ee5704544b33af6a41463e864d81409a6f6845f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:10:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:10:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-dbvwr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.130154 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5aa8290-4924-4bc2-bd8e-576e53fa4216\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-14T04:09:45Z\\\",\\\"message\\\":\\\"W0214 04:09:34.347288 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0214 04:09:34.347626 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771042174 cert, and key in /tmp/serving-cert-184764736/serving-signer.crt, /tmp/serving-cert-184764736/serving-signer.key\\\\nI0214 04:09:34.829829 1 observer_polling.go:159] Starting file observer\\\\nW0214 04:09:34.832051 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0214 04:09:34.832310 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0214 04:09:34.835332 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-184764736/tls.crt::/tmp/serving-cert-184764736/tls.key\\\\\\\"\\\\nF0214 04:09:45.190789 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:34Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.147764 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb2334a367dde5688d19979264cfd6e67f44426ae7cd249c0b0e18b7e889c8c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.165539 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 06:03:53.011435318 +0000 UTC Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.165976 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.182071 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5992e46c-bce7-4b9f-82f2-c7ffb93286cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945e51e35cb7125361ec74b9c291782c9bc28f0c319ca5c90a88c27540d6ad95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brktz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4s95t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186928 4867 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186981 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.186998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.187023 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.187041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.194542 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qbv2g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e55b70fd-de82-48c9-b879-de727928e084\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d6de20a8d6a8a1104338491af05cb4bad2960df3f3d41271922974f2bd0f355\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghrlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qbv2g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.209410 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b12a920eac3a6bb901e1eb5b3f4ec399de4fb28f20cd73bdcf463730ccc78bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a373e25fceabb99332a08d8c1928aa6023c103d488a1f02a57b3157eceb75e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 
2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.226617 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-l6v69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2afb01bb-2288-4e50-aa66-3e5f2685af58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a109f9fc7a2ea765543b2d1437ad5eccddd0ccb0542b1ffe6a67490057d6d41e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-64stb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-l6v69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.246312 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.278570 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34391a30-5865-46e9-af5f-705cc3b11fba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3
cf50512e692adeee0f21778a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-14T04:10:52Z\\\",\\\"message\\\":\\\"et:[0a:58:0a:d9:00:5c 10.217.0.92]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c94130be-172c-477c-88c4-40cc7eba30fe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0214 04:10:52.432610 6954 ovn.go:134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI0214 04:10:52.432921 6954 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI0214 04:10:52.432929 6954 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF0214 04:10:52.432932 6954 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?time\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-14T04:10:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kmqj7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6nndn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291151 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.291312 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.302328 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.326254 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e2ca498f-e329-422d-8b40-abb4d86f9b5a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec966f4b2a6aef7743d32f976a12645c5b0feda623f7baf64edf02bc35389e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b61ad62a4304538cec45962a9672a69b853848bbfcbce460811135c2ffde4849\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c04ba7033e9c86439f79a30f5ac92368859a69c6b8d46aa6e05ca42fbc37839\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.348917 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a7eff54a-2d26-4335-ad76-c454354b64c0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:10:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61da3ab9eb87eb886d6bdf805db38bcabc3db4334167f9e28fd6144269a76515\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c54a1f41a2a0e8fa5eae1575fc40b6f3240fe6ea8cafe6fd89a64e092e5b4602\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8f0b9cac3faa5bfffa911cb16b70fa88a320b7bd9314d7a0ee0732b2a57afb90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9361fb0bab2f70eaf2adc19e3fbfa9066fd7ad2fe0c94cd1a13518d2ab3708d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.361281 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"96d081a5-08ac-4716-b6ab-64959cf2933f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62a23e7ed290c1546350cfd89f40731062a0bbfc60ee74489cb0fc243bb8187f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://313dd94a6a60cea26237126b4d80e162ff2866b335e74ba876fa919f2950922e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-14T04:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-14T04:09:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-14T04:09:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.379985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: 
\"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:09 crc kubenswrapper[4867]: E0214 04:11:09.380219 4867 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:11:09 crc kubenswrapper[4867]: E0214 04:11:09.380349 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs podName:7206174b-645b-4924-8345-d1d4b1a5ec39 nodeName:}" failed. No retries permitted until 2026-02-14 04:12:13.380313277 +0000 UTC m=+165.461250591 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs") pod "network-metrics-daemon-4b6k5" (UID: "7206174b-645b-4924-8345-d1d4b1a5ec39") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.383679 4867 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-14T04:09:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab82cfd4c916a17bb5ae2454a121a8367c532dd78d0ae1e13c02868208b7c7fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-14T04:09:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-14T04:11:09Z is after 2025-08-24T17:21:41Z" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394181 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394306 4867 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394397 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394462 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.394540 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497068 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497170 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497192 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.497210 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599883 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599953 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.599969 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.600030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.600044 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.702941 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703047 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.703064 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805389 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805453 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805463 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805478 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.805490 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908466 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908753 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:09 crc kubenswrapper[4867]: I0214 04:11:09.908772 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:09Z","lastTransitionTime":"2026-02-14T04:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012819 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012863 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012873 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012916 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.012925 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116057 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116066 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116081 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.116090 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.166080 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:00:15.082804538 +0000 UTC Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218842 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218901 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218911 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218926 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.218937 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.321902 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.321983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322001 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.322036 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424460 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424498 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424530 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424547 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.424559 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527058 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527092 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.527132 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630812 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630886 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630906 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630933 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.630952 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733735 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733775 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733785 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733801 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.733813 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.836998 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837070 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837089 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837120 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.837140 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940093 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940180 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940207 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.940265 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:10Z","lastTransitionTime":"2026-02-14T04:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996745 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.996922 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997054 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:10 crc kubenswrapper[4867]: I0214 04:11:10.997101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997221 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:10 crc kubenswrapper[4867]: E0214 04:11:10.997382 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043048 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043108 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043123 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043147 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.043171 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145542 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145565 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145590 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.145605 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.166387 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 13:57:45.934682728 +0000 UTC Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248148 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248191 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248201 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248218 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.248230 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351428 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351534 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351561 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351589 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.351612 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454174 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454283 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454293 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454305 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.454314 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558546 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558574 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558602 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.558620 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661193 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661203 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661219 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.661228 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.763962 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764007 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764016 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.764040 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866169 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866222 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866240 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866262 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.866280 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969032 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969098 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969122 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969152 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:11 crc kubenswrapper[4867]: I0214 04:11:11.969175 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:11Z","lastTransitionTime":"2026-02-14T04:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072239 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072310 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072328 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072355 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.072371 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.167075 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:51:23.461573499 +0000 UTC Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175044 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175110 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175128 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175155 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.175173 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279126 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279277 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279307 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279336 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.279360 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381550 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381606 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381617 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381635 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.381648 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484846 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484905 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484923 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484948 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.484964 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586932 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586987 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.586996 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.587010 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.587019 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689857 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689974 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.689995 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.690036 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.690059 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792646 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792693 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792710 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792727 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.792743 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895861 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895927 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895946 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.895983 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.896001 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997317 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997263 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.997357 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997471 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997638 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.997940 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:12 crc kubenswrapper[4867]: E0214 04:11:12.998034 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.998977 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999002 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999012 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999030 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:12 crc kubenswrapper[4867]: I0214 04:11:12.999041 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:12Z","lastTransitionTime":"2026-02-14T04:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101054 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101117 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101139 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101167 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.101187 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.168054 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:12:37.541447037 +0000 UTC Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203470 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203566 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203627 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203651 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.203668 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306356 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306409 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306420 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.306449 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410061 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410113 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.410136 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513600 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513655 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513666 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513683 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.513694 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616175 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616246 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616267 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616294 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.616317 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718630 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718677 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718688 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718704 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.718715 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821416 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821474 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821486 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821526 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.821544 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924384 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924415 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924423 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924436 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:13 crc kubenswrapper[4867]: I0214 04:11:13.924444 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:13Z","lastTransitionTime":"2026-02-14T04:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028022 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028105 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028127 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028153 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.028171 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.130900 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.130979 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131004 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131029 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.131047 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.168630 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:34:04.508544899 +0000 UTC Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202038 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202112 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202134 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202163 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.202185 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.234993 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235078 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235103 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235133 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.235153 4867 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-14T04:11:14Z","lastTransitionTime":"2026-02-14T04:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.268898 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7"] Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.269410 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.273755 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274003 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.274280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.292414 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podStartSLOduration=84.292394151 podStartE2EDuration="1m24.292394151s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.292393471 +0000 UTC m=+106.373330815" watchObservedRunningTime="2026-02-14 04:11:14.292394151 +0000 UTC m=+106.373331475" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.330828 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=83.330804115 podStartE2EDuration="1m23.330804115s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.314207081 +0000 UTC m=+106.395144405" watchObservedRunningTime="2026-02-14 04:11:14.330804115 +0000 UTC m=+106.411741439" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.331388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.404627 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qbv2g" podStartSLOduration=83.404605743 podStartE2EDuration="1m23.404605743s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.389142539 +0000 UTC m=+106.470079863" watchObservedRunningTime="2026-02-14 04:11:14.404605743 +0000 UTC m=+106.485543067" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.426728 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-l6v69" podStartSLOduration=84.426709271 podStartE2EDuration="1m24.426709271s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.42667261 +0000 UTC m=+106.507609934" watchObservedRunningTime="2026-02-14 04:11:14.426709271 +0000 UTC m=+106.507646585" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432333 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.432653 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/079535bb-0b3b-4373-bdc5-6dbf0d926179-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.433193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/079535bb-0b3b-4373-bdc5-6dbf0d926179-service-ca\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.439017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/079535bb-0b3b-4373-bdc5-6dbf0d926179-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.447499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/079535bb-0b3b-4373-bdc5-6dbf0d926179-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-865l7\" (UID: \"079535bb-0b3b-4373-bdc5-6dbf0d926179\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.481604 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=80.481589485 podStartE2EDuration="1m20.481589485s" podCreationTimestamp="2026-02-14 04:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.481231845 +0000 UTC m=+106.562169159" watchObservedRunningTime="2026-02-14 04:11:14.481589485 +0000 UTC m=+106.562526799" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.501225 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=49.501209057 podStartE2EDuration="49.501209057s" podCreationTimestamp="2026-02-14 04:10:25 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.493127376 +0000 UTC m=+106.574064690" watchObservedRunningTime="2026-02-14 04:11:14.501209057 +0000 UTC m=+106.582146371" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.501314 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=12.5013098 podStartE2EDuration="12.5013098s" podCreationTimestamp="2026-02-14 04:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.500993162 +0000 UTC m=+106.581930476" watchObservedRunningTime="2026-02-14 04:11:14.5013098 +0000 UTC m=+106.582247114" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.515253 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-dbvwr" podStartSLOduration=83.515230394 podStartE2EDuration="1m23.515230394s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.514485814 +0000 UTC m=+106.595423128" watchObservedRunningTime="2026-02-14 04:11:14.515230394 +0000 UTC m=+106.596167718" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.545683 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.545668709 podStartE2EDuration="23.545668709s" podCreationTimestamp="2026-02-14 04:10:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.544802586 +0000 UTC m=+106.625739900" watchObservedRunningTime="2026-02-14 04:11:14.545668709 +0000 UTC m=+106.626606023" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.557912 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fl729" podStartSLOduration=84.557897019 podStartE2EDuration="1m24.557897019s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.557332824 +0000 UTC m=+106.638270138" watchObservedRunningTime="2026-02-14 04:11:14.557897019 +0000 UTC m=+106.638834333" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.576837 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9st5b" podStartSLOduration=84.576818363 podStartE2EDuration="1m24.576818363s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:14.576004532 +0000 UTC m=+106.656941846" watchObservedRunningTime="2026-02-14 04:11:14.576818363 +0000 UTC m=+106.657755677" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.597456 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.615767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" event={"ID":"079535bb-0b3b-4373-bdc5-6dbf0d926179","Type":"ContainerStarted","Data":"dd9eb63857a7b6f97d57091eaa0caa9a4cf1c61cd4592adf8a4e53b3ca48770b"} Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997180 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997234 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:14 crc kubenswrapper[4867]: I0214 04:11:14.997379 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997570 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:14 crc kubenswrapper[4867]: E0214 04:11:14.997652 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.169420 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:55:37.050201241 +0000 UTC Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.169601 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.178882 4867 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 14 04:11:15 crc kubenswrapper[4867]: I0214 04:11:15.621604 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" event={"ID":"079535bb-0b3b-4373-bdc5-6dbf0d926179","Type":"ContainerStarted","Data":"f30dc88fc10bb24a59a40d3befe525549578a5e026d09551fa9145de8fdb8f0f"} Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996743 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:16 crc kubenswrapper[4867]: I0214 04:11:16.996872 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.996990 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997102 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:16 crc kubenswrapper[4867]: E0214 04:11:16.997203 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.996205 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.996259 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.998804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.998798 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:18 crc kubenswrapper[4867]: I0214 04:11:18.998848 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.998986 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.999105 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:18 crc kubenswrapper[4867]: E0214 04:11:18.999205 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:19 crc kubenswrapper[4867]: I0214 04:11:19.998353 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:19 crc kubenswrapper[4867]: E0214 04:11:19.998503 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997161 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997213 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997287 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997291 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997367 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997430 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:20 crc kubenswrapper[4867]: I0214 04:11:20.997548 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:20 crc kubenswrapper[4867]: E0214 04:11:20.997593 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:21 crc kubenswrapper[4867]: I0214 04:11:21.418783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:11:21 crc kubenswrapper[4867]: I0214 04:11:21.420227 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:21 crc kubenswrapper[4867]: E0214 04:11:21.420486 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-6nndn_openshift-ovn-kubernetes(34391a30-5865-46e9-af5f-705cc3b11fba)\"" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.996819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.996860 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.996960 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997114 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.997587 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997664 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:22 crc kubenswrapper[4867]: I0214 04:11:22.997694 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:22 crc kubenswrapper[4867]: E0214 04:11:22.997765 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996387 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996685 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:24 crc kubenswrapper[4867]: I0214 04:11:24.996822 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.997976 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998018 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998234 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:24 crc kubenswrapper[4867]: E0214 04:11:24.998808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.655391 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log" Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.655980 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/0.log" Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656045 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b" exitCode=1 Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656087 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b"} Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656129 4867 scope.go:117] "RemoveContainer" containerID="6f23c7e00abcb489852a771f1534532f8a6c3acdd810e4432dd155a72558bcc7" Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.656530 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b" Feb 14 04:11:25 crc kubenswrapper[4867]: E0214 04:11:25.656677 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192" Feb 14 04:11:25 crc kubenswrapper[4867]: I0214 04:11:25.675606 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-865l7" podStartSLOduration=95.675587657 podStartE2EDuration="1m35.675587657s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:15.635649959 +0000 UTC m=+107.716587293" watchObservedRunningTime="2026-02-14 04:11:25.675587657 +0000 UTC m=+117.756524981" Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.659061 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log" Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996486 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996538 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996668 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:26 crc kubenswrapper[4867]: I0214 04:11:26.996581 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997594 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:26 crc kubenswrapper[4867]: E0214 04:11:26.997060 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996418 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996489 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.996568 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.997987 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:28 crc kubenswrapper[4867]: I0214 04:11:28.998010 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998472 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998552 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:28 crc kubenswrapper[4867]: E0214 04:11:28.998684 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:29 crc kubenswrapper[4867]: E0214 04:11:29.012464 4867 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 14 04:11:29 crc kubenswrapper[4867]: E0214 04:11:29.089740 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.996736 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.996878 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.996887 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.997029 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997161 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:30 crc kubenswrapper[4867]: I0214 04:11:30.997774 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:30 crc kubenswrapper[4867]: E0214 04:11:30.997970 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996291 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996344 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996430 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:32 crc kubenswrapper[4867]: I0214 04:11:32.996309 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996657 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:32 crc kubenswrapper[4867]: E0214 04:11:32.996706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.091428 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997787 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.997836 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997868 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.997706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998293 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998428 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:34 crc kubenswrapper[4867]: I0214 04:11:34.998605 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:11:34 crc kubenswrapper[4867]: E0214 04:11:34.998630 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.687788 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.690491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerStarted","Data":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"} Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.690953 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.912422 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podStartSLOduration=105.91239968 podStartE2EDuration="1m45.91239968s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:35.716594533 +0000 UTC m=+127.797531847" watchObservedRunningTime="2026-02-14 04:11:35.91239968 +0000 UTC m=+127.993336994" Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.913602 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"] Feb 14 04:11:35 crc kubenswrapper[4867]: I0214 04:11:35.913725 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:35 crc kubenswrapper[4867]: E0214 04:11:35.913837 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996681 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:36 crc kubenswrapper[4867]: I0214 04:11:36.996721 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.996949 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.997080 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:36 crc kubenswrapper[4867]: E0214 04:11:36.997196 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:37 crc kubenswrapper[4867]: I0214 04:11:37.996272 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:37 crc kubenswrapper[4867]: E0214 04:11:37.996598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:37 crc kubenswrapper[4867]: I0214 04:11:37.996631 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b" Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.710235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log" Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.710996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"} Feb 14 04:11:38 crc kubenswrapper[4867]: I0214 04:11:38.996449 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.002923 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.003101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.003145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.003304 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.003417 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.092637 4867 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:11:39 crc kubenswrapper[4867]: I0214 04:11:39.997184 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:39 crc kubenswrapper[4867]: E0214 04:11:39.997421 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.996930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997062 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.996929 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997140 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:40 crc kubenswrapper[4867]: I0214 04:11:40.997182 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:40 crc kubenswrapper[4867]: E0214 04:11:40.997393 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:41 crc kubenswrapper[4867]: I0214 04:11:41.996593 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:41 crc kubenswrapper[4867]: E0214 04:11:41.996797 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996783 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996839 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:42 crc kubenswrapper[4867]: I0214 04:11:42.996795 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.996930 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.997079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 14 04:11:42 crc kubenswrapper[4867]: E0214 04:11:42.997219 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 14 04:11:43 crc kubenswrapper[4867]: I0214 04:11:43.996759 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:43 crc kubenswrapper[4867]: E0214 04:11:43.996898 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4b6k5" podUID="7206174b-645b-4924-8345-d1d4b1a5ec39" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.624665 4867 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656235 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656865 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.656910 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.657550 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.657656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.658113 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.658665 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.659241 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.659701 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664264 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664335 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664729 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664739 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.664931 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665225 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665378 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665571 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665732 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.665984 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666076 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666092 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.666431 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667318 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.667852 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.668017 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.668175 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.670601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677786 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677854 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.677937 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678076 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678087 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678214 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678404 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678406 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678633 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.678937 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679114 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.679303 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.681224 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"] 
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.681858 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.682339 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.682853 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.683810 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.684382 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.684911 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.685360 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.687675 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.688056 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.689327 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.690064 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.690453 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.691061 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.692988 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.693442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.694824 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695110 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695165 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695297 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695320 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695527 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.695558 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.698118 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699055 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699161 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699462 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699701 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.699641 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.700352 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.700592 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: 
I0214 04:11:44.700960 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.711691 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712086 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712473 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712594 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712725 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.712821 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.713218 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.713788 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.714322 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715586 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715636 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715704 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715772 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.715840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.716070 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719573 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719746 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719885 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.719916 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.720892 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721585 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721746 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721811 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.721893 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722049 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722095 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722128 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722185 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722269 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722295 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722552 4867 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722681 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722764 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.722900 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.723852 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.723929 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724154 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724732 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.724743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.725423 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.725876 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.726267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.726308 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.727038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728174 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.728798 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729026 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729424 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.729822 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.730014 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.730679 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.730765 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731183 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.731834 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.732483 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.734347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.734802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735022 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.735413 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.736006 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.740459 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.751574 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.752301 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.752794 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.753048 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.759907 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.763386 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.767452 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.767700 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.773088 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.774791 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.775019 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.775517 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.779562 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.780406 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.781116 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.781140 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.782268 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.782861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.783515 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.783652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.784216 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.787952 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789240 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789619 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.789742 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.792070 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.793175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qlkzp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794014 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794207 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.794421 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.795025 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.797605 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.798365 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.798836 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799107 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799410 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799608 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.799969 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.801233 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.803467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.820828 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.824981 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.825912 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.827405 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.839305 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.842231 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.843690 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.846559 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.846618 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858031 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858886 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858937 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858962 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl4jb\" (UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.858997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod 
\"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859057 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859073 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859096 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859127 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859143 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfv86\" (UniqueName: 
\"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859204 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859239 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859298 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859361 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859375 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859394 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859442 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859494 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: 
\"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859557 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859605 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859660 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.859851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.861859 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.861982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862056 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862181 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862224 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862657 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862683 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.862999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863069 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863278 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863334 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863399 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863431 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863544 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863616 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.863631 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.866149 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.868408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.878404 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.882805 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.884156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.885451 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-sz8l8"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.904423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.904575 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.907620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.909785 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.911676 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.916569 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.919666 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.921558 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.922493 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.923834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.926261 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.927169 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.929787 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.930559 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.931365 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.932378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.933313 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.934319 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.936425 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.937668 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.938694 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.939720 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.940968 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.941063 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.944939 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.958186 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.965945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.966707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfv86\" (UniqueName: \"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967651 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" 
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967807 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.967958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968267 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968497 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968736 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968903 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.968925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969067 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969088 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969386 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969428 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969548 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"
Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969919 4867 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969987 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970029 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod 
\"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970191 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970273 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970323 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970349 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970359 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.970371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970468 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970490 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970534 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl4jb\" 
(UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.969675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970634 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970714 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-policies\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970727 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970771 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 
crc kubenswrapper[4867]: I0214 04:11:44.970803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d46c3923-f64c-42de-b84c-98bc872f5de6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971755 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-config\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-auth-proxy-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.971909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-node-pullsecrets\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972033 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit-dir\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/d1f6fd76-f362-495f-969d-a644f072552f-available-featuregates\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.972882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-image-import-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-service-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973794 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1815da32-cba4-41f4-80ca-45a750c7e93f-config\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.973823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-auth-proxy-config\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1815da32-cba4-41f4-80ca-45a750c7e93f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974605 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-audit\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.974831 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d8ea50d-6822-425a-8eac-6311c8537eb7-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.975197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-serving-ca\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.970443 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.975299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58c6e7c-e0bc-4833-ab34-348c03f75da7-audit-dir\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-ca\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976112 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1261994f-a993-4ffc-851a-dfce5bcc10b1-config\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.976522 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6a8f75ff-3558-4d7b-8adb-722a732d0633-images\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.977645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1261994f-a993-4ffc-851a-dfce5bcc10b1-machine-approver-tls\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.977820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc 
kubenswrapper[4867]: I0214 04:11:44.977957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/894233bb-65ed-4cdd-ac61-7a8bd8f66140-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.979840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/835c6d49-e42e-444a-a276-fb9f064fdbda-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.979950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-serving-cert\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980791 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ccd97956-aef1-45cf-9475-02928c866124-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.980975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.981030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-client\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.981602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-etcd-client\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.985114 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d46c3923-f64c-42de-b84c-98bc872f5de6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: 
\"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-metrics-tls\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a8f75ff-3558-4d7b-8adb-722a732d0633-proxy-tls\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d8ea50d-6822-425a-8eac-6311c8537eb7-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990285 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-etcd-client\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22c4dfcc-144e-40cd-bed2-dc28c210a130-serving-cert\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990562 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990667 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d1f6fd76-f362-495f-969d-a644f072552f-serving-cert\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/894233bb-65ed-4cdd-ac61-7a8bd8f66140-encryption-config\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990853 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-encryption-config\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.990959 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ccd97956-aef1-45cf-9475-02928c866124-proxy-tls\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.991010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c6e7c-e0bc-4833-ab34-348c03f75da7-serving-cert\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.991748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58c6e7c-e0bc-4833-ab34-348c03f75da7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.992999 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.993498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.995082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/835c6d49-e42e-444a-a276-fb9f064fdbda-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996224 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996270 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.996402 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 14 04:11:44 crc kubenswrapper[4867]: I0214 04:11:44.999554 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:44.999717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.018686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.039453 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.058928 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.078814 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.099040 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.119896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.139802 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.159196 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.179239 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.198470 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.234425 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.238847 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.258483 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.284686 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.299232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.319236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.339643 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.359033 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.378636 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.385499 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.399748 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.418605 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.421592 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.438310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.458770 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.464215 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/553b1e39-c2d5-459d-a7fd-058f936804cb-serving-cert\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.479760 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.482707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-config\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.504923 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.513929 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.519180 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.524683 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b1e39-c2d5-459d-a7fd-058f936804cb-service-ca-bundle\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.539329 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.578895 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.600002 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.618875 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.638436 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.660387 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.679962 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.699627 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.719249 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.739787 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.759239 4867 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.779898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.797252 4867 request.go:700] Waited for 1.015297606s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0 Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.799537 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.819386 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.839876 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.859013 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.879494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.899223 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.918642 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.938795 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.958554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.979281 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.996965 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:11:45 crc kubenswrapper[4867]: I0214 04:11:45.998568 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.019752 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.039413 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.059678 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.087246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.098840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.119683 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.138481 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.158284 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.178905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.199188 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.219041 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.238801 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.259616 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.280172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.299868 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.320204 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.339734 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.358764 4867 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.380018 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.399282 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.418644 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.439883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.460256 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.480619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.500316 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.520945 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.559908 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.579744 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.599941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.619547 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.639241 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.659661 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.678840 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.699617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.720276 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.739021 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.774770 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kfv86\" (UniqueName: \"kubernetes.io/projected/72546cbc-3499-4110-b0e4-58beab7cc8a5-kube-api-access-kfv86\") pod \"downloads-7954f5f757-x9sjv\" (UID: \"72546cbc-3499-4110-b0e4-58beab7cc8a5\") " pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.792536 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44ng\" (UniqueName: \"kubernetes.io/projected/0ccfed17-f056-4bbe-8ec3-cdd31f37be63-kube-api-access-w44ng\") pod \"dns-operator-744455d44c-t8bst\" (UID: \"0ccfed17-f056-4bbe-8ec3-cdd31f37be63\") " pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.797354 4867 request.go:700] Waited for 1.827950885s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.812829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1815da32-cba4-41f4-80ca-45a750c7e93f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-ff8rv\" (UID: \"1815da32-cba4-41f4-80ca-45a750c7e93f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.833111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxzvd\" (UniqueName: \"kubernetes.io/projected/6a8f75ff-3558-4d7b-8adb-722a732d0633-kube-api-access-mxzvd\") pod \"machine-config-operator-74547568cd-wcdc2\" (UID: \"6a8f75ff-3558-4d7b-8adb-722a732d0633\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.852759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.883620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlknt\" (UniqueName: \"kubernetes.io/projected/d58c6e7c-e0bc-4833-ab34-348c03f75da7-kube-api-access-jlknt\") pod \"apiserver-7bbb656c7d-jsc7b\" (UID: \"d58c6e7c-e0bc-4833-ab34-348c03f75da7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.893284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx7lz\" (UniqueName: \"kubernetes.io/projected/acdb1323-fec8-46fa-9f36-9b0f7f74cca4-kube-api-access-fx7lz\") pod \"cluster-samples-operator-665b6dd947-pmlgc\" (UID: \"acdb1323-fec8-46fa-9f36-9b0f7f74cca4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.895971 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.907671 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.914520 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rcf4\" (UniqueName: \"kubernetes.io/projected/835c6d49-e42e-444a-a276-fb9f064fdbda-kube-api-access-5rcf4\") pod \"cluster-image-registry-operator-dc59b4c8b-8bmcr\" (UID: \"835c6d49-e42e-444a-a276-fb9f064fdbda\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.918216 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.934485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbcmm\" (UniqueName: \"kubernetes.io/projected/77ddb26b-22ee-4a97-81ab-7e82c611ebd5-kube-api-access-hbcmm\") pod \"kube-storage-version-migrator-operator-b67b599dd-wgfm8\" (UID: \"77ddb26b-22ee-4a97-81ab-7e82c611ebd5\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.956008 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ln4g\" (UniqueName: \"kubernetes.io/projected/1261994f-a993-4ffc-851a-dfce5bcc10b1-kube-api-access-7ln4g\") pod \"machine-approver-56656f9798-5kv6p\" (UID: \"1261994f-a993-4ffc-851a-dfce5bcc10b1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.973239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tsn\" (UniqueName: \"kubernetes.io/projected/553b1e39-c2d5-459d-a7fd-058f936804cb-kube-api-access-b5tsn\") pod \"authentication-operator-69f744f599-p69vd\" (UID: \"553b1e39-c2d5-459d-a7fd-058f936804cb\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.984220 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" Feb 14 04:11:46 crc kubenswrapper[4867]: I0214 04:11:46.997014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpkk9\" (UniqueName: \"kubernetes.io/projected/ccd97956-aef1-45cf-9475-02928c866124-kube-api-access-gpkk9\") pod \"machine-config-controller-84d6567774-szcmx\" (UID: \"ccd97956-aef1-45cf-9475-02928c866124\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.017090 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xpw2\" (UniqueName: \"kubernetes.io/projected/22c4dfcc-144e-40cd-bed2-dc28c210a130-kube-api-access-5xpw2\") pod \"etcd-operator-b45778765-ccg6j\" (UID: \"22c4dfcc-144e-40cd-bed2-dc28c210a130\") " pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.024944 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.033573 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfz2\" (UniqueName: \"kubernetes.io/projected/894233bb-65ed-4cdd-ac61-7a8bd8f66140-kube-api-access-6pfz2\") pod \"apiserver-76f77b778f-8qkg2\" (UID: \"894233bb-65ed-4cdd-ac61-7a8bd8f66140\") " pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.056637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.057865 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"controller-manager-879f6c89f-pctg8\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.068225 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.074759 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"console-f9d7485db-c4c52\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.078139 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.087411 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.101868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l2hg\" (UniqueName: \"kubernetes.io/projected/6d8ea50d-6822-425a-8eac-6311c8537eb7-kube-api-access-5l2hg\") pod \"openshift-controller-manager-operator-756b6f6bc6-886ct\" (UID: \"6d8ea50d-6822-425a-8eac-6311c8537eb7\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.115321 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.115772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl4jb\" (UniqueName: \"kubernetes.io/projected/d1f6fd76-f362-495f-969d-a644f072552f-kube-api-access-kl4jb\") pod \"openshift-config-operator-7777fb866f-l8d7w\" (UID: \"d1f6fd76-f362-495f-969d-a644f072552f\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.121213 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.123037 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.135410 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74ck\" (UniqueName: \"kubernetes.io/projected/a9bcb9a2-1128-4c6b-80b1-47afd1a46511-kube-api-access-t74ck\") pod \"multus-admission-controller-857f4d67dd-l6gq7\" (UID: \"a9bcb9a2-1128-4c6b-80b1-47afd1a46511\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.156819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.160610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.167637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.168463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp6r7\" (UniqueName: \"kubernetes.io/projected/d46c3923-f64c-42de-b84c-98bc872f5de6-kube-api-access-hp6r7\") pod \"openshift-apiserver-operator-796bbdcf4f-nmdjh\" (UID: \"d46c3923-f64c-42de-b84c-98bc872f5de6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.169141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.181146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x9sjv"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.181345 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.206073 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.210370 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"] Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.213519 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1815da32_cba4_41f4_80ca_45a750c7e93f.slice/crio-813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222 WatchSource:0}: Error finding container 813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222: Status 404 returned error can't find the container with id 813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222 Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.219056 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.224717 4867 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr"] Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.230876 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd58c6e7c_e0bc_4833_ab34_348c03f75da7.slice/crio-c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142 WatchSource:0}: Error finding container c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142: Status 404 returned error can't find the container with id c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142 Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.234777 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.242347 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72546cbc_3499_4110_b0e4_58beab7cc8a5.slice/crio-ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a WatchSource:0}: Error finding container ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a: Status 404 returned error can't find the container with id ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.247882 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.263692 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.267799 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.284640 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod835c6d49_e42e_444a_a276_fb9f064fdbda.slice/crio-144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d WatchSource:0}: Error finding container 144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d: Status 404 returned error can't find the container with id 144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.285186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.309620 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310644 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310715 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310847 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: 
\"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310916 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.310995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311036 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311083 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311223 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311253 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311324 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311385 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311450 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311494 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.311542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.311828 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:47.81181592 +0000 UTC m=+139.892753234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.312483 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.341367 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.412959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413136 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413165 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413229 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413301 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413378 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413393 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413441 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413489 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413592 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413627 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413744 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413775 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413870 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " 
pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.413972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt2g9\" (UniqueName: \"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414033 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414521 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414691 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414707 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414749 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414812 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whx59\" (UniqueName: \"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.414989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod 
\"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415076 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr9gw\" (UniqueName: \"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415118 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415133 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415159 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415228 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") 
pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415359 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415383 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.415427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.416787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 
04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.417222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.417706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-config\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-config\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418398 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418646 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.418716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dc723269-8ee6-4236-9eaa-169a00d76442-trusted-ca\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8437deca-adf5-4648-9abe-2c1c6376d07b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.425818 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc723269-8ee6-4236-9eaa-169a00d76442-serving-cert\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.426044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.430805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fd832b4-de40-4266-93fb-3682eeb9dd3e-trusted-ca\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.431265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.437565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.437705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.440079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.449246 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.449611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.451059 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.451581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8437deca-adf5-4648-9abe-2c1c6376d07b-images\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.452093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.453183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.454792 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.457533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 
04:11:47.463674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.468845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.470727 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.470982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.472829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.472555 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fd832b4-de40-4266-93fb-3682eeb9dd3e-metrics-tls\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.475091 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:47.975051522 +0000 UTC m=+140.055988836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.475939 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-t8bst"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.477262 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8qkg2"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.479154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"oauth-openshift-558db77b4-c65kr\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.491994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhp4\" (UniqueName: \"kubernetes.io/projected/8437deca-adf5-4648-9abe-2c1c6376d07b-kube-api-access-wkhp4\") pod \"machine-api-operator-5694c8668f-699tj\" (UID: \"8437deca-adf5-4648-9abe-2c1c6376d07b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.499609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-bound-sa-token\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519566 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whx59\" (UniqueName: \"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519677 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519699 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr9gw\" (UniqueName: \"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519736 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519864 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519966 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.519985 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: 
\"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520078 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520097 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520134 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 
04:11:47.520196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520242 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520262 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: 
\"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt2g9\" (UniqueName: \"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.520520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0d5c79-9e98-4f09-a336-9c284ba81d82-config\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: 
I0214 04:11:47.522207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-socket-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-plugins-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.522862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.523299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02d4609f-f699-4ac2-bc41-752b879681ba-config\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.523466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-registration-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.524453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec0d5c79-9e98-4f09-a336-9c284ba81d82-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.524623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-mountpoint-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.525296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02d4609f-f699-4ac2-bc41-752b879681ba-serving-cert\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.525557 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b71d414-e6bf-4f51-a808-1938c1edf207-service-ca-bundle\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " 
pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.526997 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7cedc5a6-929b-43ca-a8b0-6dca555ca455-csi-data-dir\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-certs\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527318 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.527564 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.027551411 +0000 UTC m=+140.108488725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.527973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/541a6523-92f6-477b-9d35-a3a0074f5de3-config-volume\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.528130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1b196c26-84a1-408f-913b-eb50572102cf-tmpfs\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.528775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d74f081b-fe53-4642-8340-a8e602c627f1-signing-cabundle\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.529685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/541a6523-92f6-477b-9d35-a3a0074f5de3-metrics-tls\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " 
pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.530036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-apiservice-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.532637 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-stats-auth\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.532698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.533319 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.533365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-default-certificate\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.535842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/89db71f1-1a8b-4c57-9a3d-eb725060aee9-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.536275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0c7654d-1553-4b68-8af4-253f77d7c657-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540137 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540249 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/0d05475f-b787-49dc-8a0b-c98e47f40a3b-node-bootstrap-token\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540470 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.540581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-srv-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.543547 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1dba42c-e410-49fd-8c48-449fca5d65dc-profile-collector-cert\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.543968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.544257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/46664b60-c0df-4869-9304-cec4de385a86-srv-cert\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.545041 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3658855-0c06-490f-9bcc-33de7069178e-cert\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.545489 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d74f081b-fe53-4642-8340-a8e602c627f1-signing-key\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.556166 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod894233bb_65ed_4cdd_ac61_7a8bd8f66140.slice/crio-ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a WatchSource:0}: Error finding container 
ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a: Status 404 returned error can't find the container with id ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.556488 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1b196c26-84a1-408f-913b-eb50572102cf-webhook-cert\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.556642 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4b71d414-e6bf-4f51-a808-1938c1edf207-metrics-certs\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.566948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.570828 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.572974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z26vn\" (UniqueName: \"kubernetes.io/projected/dc723269-8ee6-4236-9eaa-169a00d76442-kube-api-access-z26vn\") pod \"console-operator-58897d9998-htv2n\" (UID: \"dc723269-8ee6-4236-9eaa-169a00d76442\") " pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.578226 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.586473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.597805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"route-controller-manager-6576b87f9c-29p6h\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.613136 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mznjl\" (UniqueName: \"kubernetes.io/projected/1fd832b4-de40-4266-93fb-3682eeb9dd3e-kube-api-access-mznjl\") pod \"ingress-operator-5b745b69d9-485km\" (UID: \"1fd832b4-de40-4266-93fb-3682eeb9dd3e\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.621738 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.622185 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.122160193 +0000 UTC m=+140.203097507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.634711 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.643403 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8f75ff_3558_4d7b_8adb_722a732d0633.slice/crio-702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c WatchSource:0}: Error finding container 702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c: Status 404 returned error can't find the container with id 702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.650489 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.694477 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.695114 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.698065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkjjw\" (UniqueName: \"kubernetes.io/projected/0d05475f-b787-49dc-8a0b-c98e47f40a3b-kube-api-access-nkjjw\") pod \"machine-config-server-sz8l8\" (UID: \"0d05475f-b787-49dc-8a0b-c98e47f40a3b\") " pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.701981 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.720750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkqlf\" (UniqueName: \"kubernetes.io/projected/02d4609f-f699-4ac2-bc41-752b879681ba-kube-api-access-bkqlf\") pod \"service-ca-operator-777779d784-rxprp\" (UID: \"02d4609f-f699-4ac2-bc41-752b879681ba\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.724796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.725119 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.225108738 +0000 UTC m=+140.306046042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.730260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.734733 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs22v\" (UniqueName: \"kubernetes.io/projected/9a16b0f1-4ef6-457a-a766-a0cc2181501f-kube-api-access-gs22v\") pod \"migrator-59844c95c7-5k4wz\" (UID: \"9a16b0f1-4ef6-457a-a766-a0cc2181501f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.744101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"ec4665aac003c1b4e7cba85ff048914da8febde16b0034c9afb5b3fb2a36029a"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.744890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"2bbbfd0f929a463b3834210b817fb454c9c5152759f36a425668d7478a36ca3a"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.745497 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" event={"ID":"1815da32-cba4-41f4-80ca-45a750c7e93f","Type":"ContainerStarted","Data":"813e40a2e1867731aba1c9c1cac2258dab16eefb257f8f867e54e1c39dbd1222"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.747199 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"f16afef3d7e808dbf734065cea30fefc8ce32136d50bcf987d25ea20a8ea7a54"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.748197 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-p69vd"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.749985 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"ead0db96b1b320d87e57ce68f5ba9c92c1e3e7abf4498321b5f8a82d424e007a"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.750770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" event={"ID":"835c6d49-e42e-444a-a276-fb9f064fdbda","Type":"ContainerStarted","Data":"144c6c8b1c76f545a725545d137202c6089bbe081caa00b695421ad1383b769d"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.753437 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerStarted","Data":"c5c4776deb3975945db7e0cf31af409b0ccecd9b88acf8d033c946f648493142"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.755756 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fx4z\" (UniqueName: \"kubernetes.io/projected/46664b60-c0df-4869-9304-cec4de385a86-kube-api-access-7fx4z\") pod \"olm-operator-6b444d44fb-tcss9\" (UID: \"46664b60-c0df-4869-9304-cec4de385a86\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.760731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerStarted","Data":"c43a26497795da97ad6a6c4586b62e12ae1ccaaa8dd33d4cfe17199345411003"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.763439 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.764957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"ec6dcbf0f8a230a42d760e895824929c89848228caceaa01f075507289d58748"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.765428 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr9gw\" (UniqueName: \"kubernetes.io/projected/89db71f1-1a8b-4c57-9a3d-eb725060aee9-kube-api-access-rr9gw\") pod \"control-plane-machine-set-operator-78cbb6b69f-f47sx\" (UID: \"89db71f1-1a8b-4c57-9a3d-eb725060aee9\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.768962 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.769499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"702bf3960d4dd807ff95a87ec715e2a8341224aa7f7a185ffa011415c4aa6f9c"} Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.769726 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec0d5c79-9e98-4f09-a336-9c284ba81d82-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6tvm5\" (UID: \"ec0d5c79-9e98-4f09-a336-9c284ba81d82\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.771199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.775709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjwtw\" (UniqueName: \"kubernetes.io/projected/d74f081b-fe53-4642-8340-a8e602c627f1-kube-api-access-kjwtw\") pod \"service-ca-9c57cc56f-9kgzh\" (UID: \"d74f081b-fe53-4642-8340-a8e602c627f1\") " pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.785481 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.793693 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.800247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stznr\" (UniqueName: \"kubernetes.io/projected/a0c7654d-1553-4b68-8af4-253f77d7c657-kube-api-access-stznr\") pod \"package-server-manager-789f6589d5-rv8cb\" (UID: \"a0c7654d-1553-4b68-8af4-253f77d7c657\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.801730 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4 WatchSource:0}: Error finding container 0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4: Status 404 returned error can't find the container with id 0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4 Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.805662 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f6fd76_f362_495f_969d_a644f072552f.slice/crio-607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b WatchSource:0}: Error finding container 607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b: Status 404 returned error can't find the container with id 607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.807128 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd46c3923_f64c_42de_b84c_98bc872f5de6.slice/crio-227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a WatchSource:0}: Error finding container 227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a: Status 404 returned error can't find the container with id 227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.817890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc7cr\" (UniqueName: \"kubernetes.io/projected/d3658855-0c06-490f-9bcc-33de7069178e-kube-api-access-zc7cr\") pod \"ingress-canary-8ftf5\" (UID: \"d3658855-0c06-490f-9bcc-33de7069178e\") " pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.820209 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826078 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.826259 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-14 04:11:48.326231936 +0000 UTC m=+140.407169250 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826445 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.826466 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.826865 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.326843932 +0000 UTC m=+140.407781256 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.830845 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-sz8l8" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.835235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw4m\" (UniqueName: \"kubernetes.io/projected/b1dba42c-e410-49fd-8c48-449fca5d65dc-kube-api-access-4zw4m\") pod \"catalog-operator-68c6474976-dgp2v\" (UID: \"b1dba42c-e410-49fd-8c48-449fca5d65dc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.841671 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-ccg6j"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.854282 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8ftf5" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.855203 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whx59\" (UniqueName: \"kubernetes.io/projected/4b71d414-e6bf-4f51-a808-1938c1edf207-kube-api-access-whx59\") pod \"router-default-5444994796-qlkzp\" (UID: \"4b71d414-e6bf-4f51-a808-1938c1edf207\") " pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:47 crc kubenswrapper[4867]: W0214 04:11:47.862872 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c4dfcc_144e_40cd_bed2_dc28c210a130.slice/crio-263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6 WatchSource:0}: Error finding container 263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6: Status 404 returned error can't find the container with id 263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6 Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.875761 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt2g9\" (UniqueName: \"kubernetes.io/projected/1b196c26-84a1-408f-913b-eb50572102cf-kube-api-access-pt2g9\") pod \"packageserver-d55dfcdfc-s94ht\" (UID: \"1b196c26-84a1-408f-913b-eb50572102cf\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.894515 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-485km"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.896019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzxjb\" (UniqueName: \"kubernetes.io/projected/7cedc5a6-929b-43ca-a8b0-6dca555ca455-kube-api-access-hzxjb\") pod \"csi-hostpathplugin-pzj5s\" (UID: \"7cedc5a6-929b-43ca-a8b0-6dca555ca455\") " pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.917491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"marketplace-operator-79b997595-mkw9h\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.927792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.927975 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.42794729 +0000 UTC m=+140.508884604 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.928143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:47 crc kubenswrapper[4867]: E0214 04:11:47.928568 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.428556145 +0000 UTC m=+140.509493459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.937602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"collect-profiles-29517360-jfvsd\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.953828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs9hv\" (UniqueName: \"kubernetes.io/projected/541a6523-92f6-477b-9d35-a3a0074f5de3-kube-api-access-cs9hv\") pod \"dns-default-gc8sl\" (UID: \"541a6523-92f6-477b-9d35-a3a0074f5de3\") " pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.975975 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd4dbaf5-45ee-4171-b6b9-7deba44931ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t6c97\" (UID: \"dd4dbaf5-45ee-4171-b6b9-7deba44931ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.988699 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-l6gq7"] Feb 14 04:11:47 crc kubenswrapper[4867]: I0214 04:11:47.992000 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct"] Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.014294 4867 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fd832b4_de40_4266_93fb_3682eeb9dd3e.slice/crio-eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e WatchSource:0}: Error finding container eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e: Status 404 returned error can't find the container with id eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.028917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.029321 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.029492 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.529474968 +0000 UTC m=+140.610412282 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.029754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.030051 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.530043383 +0000 UTC m=+140.610980697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.033539 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d8ea50d_6822_425a_8eac_6311c8537eb7.slice/crio-50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78 WatchSource:0}: Error finding container 50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78: Status 404 returned error can't find the container with id 50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.036461 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.043514 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.045881 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.052354 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.053328 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9bcb9a2_1128_4c6b_80b1_47afd1a46511.slice/crio-9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674 WatchSource:0}: Error finding container 9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674: Status 404 returned error can't find the container with id 9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.056707 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.071886 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.078397 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.086723 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.101932 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.109404 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.125257 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.130279 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.130623 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.630608947 +0000 UTC m=+140.711546261 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.146185 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.166498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-htv2n"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.232752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.233076 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.733065829 +0000 UTC m=+140.814003143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.305354 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-699tj"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.334104 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.334275 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.834242749 +0000 UTC m=+140.915180063 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.334404 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.334685 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.83467373 +0000 UTC m=+140.915611044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.342472 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc723269_8ee6_4236_9eaa_169a00d76442.slice/crio-c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c WatchSource:0}: Error finding container c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c: Status 404 returned error can't find the container with id c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.348195 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.370149 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.435072 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.435416 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.935391348 +0000 UTC m=+141.016328662 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.435656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.435915 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:48.935904051 +0000 UTC m=+141.016841365 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.454282 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rxprp"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.455701 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx"] Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.465187 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a16b0f1_4ef6_457a_a766_a0cc2181501f.slice/crio-d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663 WatchSource:0}: Error finding container d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663: Status 404 returned error can't find the container with id d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663 Feb 14 04:11:48 crc kubenswrapper[4867]: W0214 04:11:48.491289 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d4609f_f699_4ac2_bc41_752b879681ba.slice/crio-0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107 WatchSource:0}: Error finding container 0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107: Status 404 returned error can't find the container with id 0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.534314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8ftf5"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.536901 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.537357 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.037342957 +0000 UTC m=+141.118280271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.638756 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.639472 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.139459351 +0000 UTC m=+141.220396665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.740547 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.740867 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.240851495 +0000 UTC m=+141.321788809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.842546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.843217 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.343200265 +0000 UTC m=+141.424137579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.891888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" event={"ID":"835c6d49-e42e-444a-a276-fb9f064fdbda","Type":"ContainerStarted","Data":"cc2f33bbd998239443d5512e9c48d9641b1036c627268f3ee030a0d1cbcb4206"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.897976 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" event={"ID":"89db71f1-1a8b-4c57-9a3d-eb725060aee9","Type":"ContainerStarted","Data":"b30438543696cd384ed51ec93bdf53c2b2d40d7cbe5536f977a7badcc6e3f3fe"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.916207 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.919414 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gc8sl"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.921249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" event={"ID":"77ddb26b-22ee-4a97-81ab-7e82c611ebd5","Type":"ContainerStarted","Data":"6f58856879441c20ef32c48d2b07eeb92fe9c4144e96f5d0e64cc487391dceab"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.921668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" event={"ID":"77ddb26b-22ee-4a97-81ab-7e82c611ebd5","Type":"ContainerStarted","Data":"cc79acb7ca2c05a7b6f2b1184ae328fe64a8f5e6b88328704675be85a52db37e"} Feb 14 04:11:48 crc 
kubenswrapper[4867]: I0214 04:11:48.935583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerStarted","Data":"0005bb5ab795f3cb3316208372a9d4195e426c2a1f38a510bf0162032f954a9f"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.943682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.943866 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.443837771 +0000 UTC m=+141.524775085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.943978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:48 crc kubenswrapper[4867]: E0214 04:11:48.945410 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.44535797 +0000 UTC m=+141.526295504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.964198 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"] Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.964380 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"5f6ce5ba2b04602f0c14203c86f267a1383ed602966128d4ccefac88636b0e0f"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.972386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerStarted","Data":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.972432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerStarted","Data":"0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.981158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" event={"ID":"02d4609f-f699-4ac2-bc41-752b879681ba","Type":"ContainerStarted","Data":"0c37da51818bd2a86c5c4020ae0c2e247acf00e1a8d050fbdd365928ff64f107"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.983552 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" event={"ID":"6d8ea50d-6822-425a-8eac-6311c8537eb7","Type":"ContainerStarted","Data":"50102f8a422b34bb67884ff8b07519f02caa657555708d3170f9b4f1160b2d78"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.985047 4867 generic.go:334] "Generic (PLEG): container finished" podID="894233bb-65ed-4cdd-ac61-7a8bd8f66140" containerID="1e918a4597fc13bcf23fff6b70d5dcd093ca46273a1af1cda296479943dc1f92" exitCode=0 Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.985872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerDied","Data":"1e918a4597fc13bcf23fff6b70d5dcd093ca46273a1af1cda296479943dc1f92"} Feb 14 04:11:48 crc kubenswrapper[4867]: I0214 04:11:48.991741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"e2a84ac2941e9118b0d6ca163c3b651937c952981f9423f9c36f0e1f4479d0bf"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.009042 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 
10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.009105 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020661 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.020720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sz8l8" event={"ID":"0d05475f-b787-49dc-8a0b-c98e47f40a3b","Type":"ContainerStarted","Data":"bc740684119f9953c31d7aa9b7d34476d57a87ba84403a05c62af5df446355d0"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.028284 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9kgzh"] Feb 14 04:11:49 crc kubenswrapper[4867]: W0214 04:11:49.028853 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b196c26_84a1_408f_913b_eb50572102cf.slice/crio-bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b WatchSource:0}: Error finding container bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b: Status 404 returned error can't find the container with id bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.032065 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.032119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"607ec17b312d47e50fa406a7fff1d088a74c699097b2d55b67b19d4ae24f518b"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.046240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.046849 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.546816087 +0000 UTC m=+141.627753401 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.053121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.054991 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.554971864 +0000 UTC m=+141.635909178 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.080058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"c6603d0502e3bf3a96f69d86db4669bec69431826e76db3e19e87530f2205a4c"} Feb 14 04:11:49 crc kubenswrapper[4867]: W0214 04:11:49.099163 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd74f081b_fe53_4642_8340_a8e602c627f1.slice/crio-6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2 WatchSource:0}: Error finding container 6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2: Status 404 returned error can't find the container with id 6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2 Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.130163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"9ea62d91d1858319052e207e1983303fd5ae8466b8ddde272b8623ca28891674"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.139083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" event={"ID":"1815da32-cba4-41f4-80ca-45a750c7e93f","Type":"ContainerStarted","Data":"54533f7991dc430af26aa8af2dd88dc0fc6f065ca009bdb0a7dac8bbe30947df"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.152126 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 
04:11:49.155662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.156297 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.656280457 +0000 UTC m=+141.737217771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.160254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"7a72087c5b6144c6f3aed9ba692230758bb339399f3f613b14ed37ff2fa94e73"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.178735 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" event={"ID":"d46c3923-f64c-42de-b84c-98bc872f5de6","Type":"ContainerStarted","Data":"74bb88ddf246c9f9f45e960909d198f3f135c09d430e61473c281c28c45bee0c"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.178773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" event={"ID":"d46c3923-f64c-42de-b84c-98bc872f5de6","Type":"ContainerStarted","Data":"227104c829d767a5114f57777c951690c6c9a1f5b806413a08b5f9308019149a"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.185764 4867 generic.go:334] "Generic (PLEG): container finished" podID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerID="12da2f6592db926bc6b038b2412413441a52d11e48a0905c013aecc02bac9d5b" exitCode=0 Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.185814 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerDied","Data":"12da2f6592db926bc6b038b2412413441a52d11e48a0905c013aecc02bac9d5b"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.199874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" event={"ID":"22c4dfcc-144e-40cd-bed2-dc28c210a130","Type":"ContainerStarted","Data":"263364339aa134cb4836f537b5988d857bea9c4594e07ba03a259b85c85888f6"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.217929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerStarted","Data":"d80c060a94d17951aad5e051f55bf43d373a158b1129e1b3c3d94726f3601c49"} Feb 14 04:11:49 crc 
kubenswrapper[4867]: I0214 04:11:49.222049 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"974151c82d92401f369f342fcb19c0e0d4a552b08f80e650f6f661183db79009"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.233731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"aeb08e1d2ccc4adbf42036ec7046b270415d4121c0d2775b6d94c8142ecb9b04"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.239877 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzj5s"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.264068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.273175 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.773150707 +0000 UTC m=+141.854088021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.296756 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.296801 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"f51659d90d716607c500c828d381bbeb2f56b13403ec3fa1d830b9afe7e14995"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.305730 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"eeb2706f83e48e704a97b22ba18e66fc2203ad21a3e5aaa8b32f2186829ae52e"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.306052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.309054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" 
event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"1798ae6291d65e0cbe62da82880f9738a214a7260938d99f551bb2b6fd0ad5ff"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.309944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.310849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"d8c00e67797b70c4d461fc731bea49b9afed652c05d67318593d72642cce6663"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.312942 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerStarted","Data":"b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e"} Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.313266 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.327846 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pctg8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.327900 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.344154 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.365626 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.365801 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.865773369 +0000 UTC m=+141.946710683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.365937 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.366270 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.866259741 +0000 UTC m=+141.947197055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.417926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.453318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v"] Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.466690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.466790 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.966764124 +0000 UTC m=+142.047701438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.466950 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.468043 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:49.968029066 +0000 UTC m=+142.048966420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.488394 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-x9sjv" podStartSLOduration=118.488370195 podStartE2EDuration="1m58.488370195s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.466738343 +0000 UTC m=+141.547675657" watchObservedRunningTime="2026-02-14 04:11:49.488370195 +0000 UTC m=+141.569307509" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.490928 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-wgfm8" podStartSLOduration=118.490915159 podStartE2EDuration="1m58.490915159s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.48546044 +0000 UTC m=+141.566397754" watchObservedRunningTime="2026-02-14 04:11:49.490915159 +0000 UTC m=+141.571852483" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.567754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.567928 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.067897262 +0000 UTC m=+142.148834576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.568092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.568406 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.068391035 +0000 UTC m=+142.149328349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.669133 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.669298 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.169275687 +0000 UTC m=+142.250213001 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.669498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.669747 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.169739859 +0000 UTC m=+142.250677163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.771187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.771964 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.271948944 +0000 UTC m=+142.352886258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.853900 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nmdjh" podStartSLOduration=118.853882033 podStartE2EDuration="1m58.853882033s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.852670132 +0000 UTC m=+141.933607456" watchObservedRunningTime="2026-02-14 04:11:49.853882033 +0000 UTC m=+141.934819347" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.854457 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-ff8rv" podStartSLOduration=118.854450747 podStartE2EDuration="1m58.854450747s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:49.822676407 +0000 UTC m=+141.903613721" watchObservedRunningTime="2026-02-14 04:11:49.854450747 +0000 UTC m=+141.935388061" Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.872863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.873203 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.373187205 +0000 UTC m=+142.454124519 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.973968 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.974137 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.474112988 +0000 UTC m=+142.555050302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:49 crc kubenswrapper[4867]: I0214 04:11:49.974216 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:49 crc kubenswrapper[4867]: E0214 04:11:49.974497 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.474486058 +0000 UTC m=+142.555423422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.075182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.077072 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.577044973 +0000 UTC m=+142.657982357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.182921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.183325 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.683311082 +0000 UTC m=+142.764248406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.220324 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8bmcr" podStartSLOduration=119.220304105 podStartE2EDuration="1m59.220304105s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.103488637 +0000 UTC m=+142.184425961" watchObservedRunningTime="2026-02-14 04:11:50.220304105 +0000 UTC m=+142.301241439" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.250764 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podStartSLOduration=119.250743311 podStartE2EDuration="1m59.250743311s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.248786441 +0000 UTC m=+142.329723755" watchObservedRunningTime="2026-02-14 04:11:50.250743311 +0000 UTC m=+142.331680625" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.287939 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.288036 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.788020952 +0000 UTC m=+142.868958266 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.288316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.288604 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.788596146 +0000 UTC m=+142.869533460 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.310694 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c4c52" podStartSLOduration=120.310677829 podStartE2EDuration="2m0.310677829s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.306712168 +0000 UTC m=+142.387649482" watchObservedRunningTime="2026-02-14 04:11:50.310677829 +0000 UTC m=+142.391615143" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.353184 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podStartSLOduration=119.353167603 podStartE2EDuration="1m59.353167603s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:50.328491514 +0000 UTC m=+142.409428828" watchObservedRunningTime="2026-02-14 04:11:50.353167603 +0000 UTC m=+142.434104917" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.398840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.398933 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.898909789 +0000 UTC m=+142.979847103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.399219 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.399579 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:50.899567666 +0000 UTC m=+142.980504970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.401454 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerStarted","Data":"0b46292ee8547b3f863b2a98bb8fb2cf8703a9757ad76735d9fe0ebd6ef2ffbd"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.404013 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"bc81bfa7a43c3207c40e6706fb2fd31e8a1cd427a12e1e87a713601dd9213e3b"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.423343 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"7359c9966fb493273fc78879abb8bc048cba601f71bd1221d1053d939eaff9ef"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.440726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"f397ed60c1c846321f943b10609443e3f5bd17a9c6dd2ecf373fb19774fdd18f"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.443269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" event={"ID":"ec0d5c79-9e98-4f09-a336-9c284ba81d82","Type":"ContainerStarted","Data":"d1780c3399c6c4ac3170d23e77834088498a3ff63bc91665ba42c8a13e3d4fbb"} Feb 14 
04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.445381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-sz8l8" event={"ID":"0d05475f-b787-49dc-8a0b-c98e47f40a3b","Type":"ContainerStarted","Data":"65216933a12d5c19e8bc55d0d569c235523b6dbe22cd57d5990271ce4e425222"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.446247 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ftf5" event={"ID":"d3658855-0c06-490f-9bcc-33de7069178e","Type":"ContainerStarted","Data":"5613c8dd19ecd64e0d1180d68287ca020cc270c9863eb29760e0d932df960c3a"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.447120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"2505211cfa615779d9f8e3b0b78e975c0737917c367dbe131808e6bc917ecd9d"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.448538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" event={"ID":"d74f081b-fe53-4642-8340-a8e602c627f1","Type":"ContainerStarted","Data":"6bbf18ce5c812e12910a07b611ffec21ee29c37d3d1a406668755058c6f086a2"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.452680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" event={"ID":"dd4dbaf5-45ee-4171-b6b9-7deba44931ff","Type":"ContainerStarted","Data":"66d11694151f7873d16f9b3dbc561e7d675ca9fa539f15e70d22c62627ee1279"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.459172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"ca0c26a1e9b7b16e001e49e5ddce44e9963632069c1c51977ac55d694d506ff1"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.461054 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1f6fd76-f362-495f-969d-a644f072552f" containerID="66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1" exitCode=0 Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.461111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerDied","Data":"66fbea02ea2b5f3c6ffdf61d25eeeee17d6b58bd4bb90aedfb7b5388f306f2b1"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.466160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"0f4ffcd9be28b010cbb3f90f45a501cb69ec2cbb557453e40c94ee2eaabe1408"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.467261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerStarted","Data":"e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.487804 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" 
event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"b0126d8e37d5f7cc69f3c939759dd77b3373c63949068219c15168a6526dc330"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.496062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.499104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"e03a7a990f5400e00c868e6bf732598ed46ee2c93e55a4f998fa09c139acce06"} Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.501823 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-pctg8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.501854 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.502126 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.502145 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.504355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.505493 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.005475616 +0000 UTC m=+143.086412930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.607981 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.615745 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.115721757 +0000 UTC m=+143.196659071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.709709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.710194 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.210179345 +0000 UTC m=+143.291116659 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.811907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.812300 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.312286099 +0000 UTC m=+143.393223413 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:50 crc kubenswrapper[4867]: I0214 04:11:50.912990 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:50 crc kubenswrapper[4867]: E0214 04:11:50.913493 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.413474388 +0000 UTC m=+143.494411702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.014558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.014873 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.514861264 +0000 UTC m=+143.595798578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.115979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.116133 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.616115665 +0000 UTC m=+143.697052979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.116210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.116527 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.616501215 +0000 UTC m=+143.697438529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.217233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.217449 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.717424608 +0000 UTC m=+143.798361922 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.217690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.218032 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.718018603 +0000 UTC m=+143.798955917 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.318524 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.318834 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.818819733 +0000 UTC m=+143.899757037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.420617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.421090 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:51.92107552 +0000 UTC m=+144.002012854 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.457679 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.507776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"dc6b34c0a2b6b91075fb741871027a4a30faaff955391c22fdb83614576be619"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.508911 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"6a9fdef78d2c3532db91530b9b0f923268929b942fc13d36c12fa391ad6c39d5"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.509850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" event={"ID":"89db71f1-1a8b-4c57-9a3d-eb725060aee9","Type":"ContainerStarted","Data":"32f54270e4bbc4ffd262bfa8f6df761c3f4f277c90d8ea5a8e2f59467a048f45"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.511881 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"5f164b53316141d80833dd0afb26eb9682abfcc6f23401e2fa506cbf27329a34"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.514970 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" 
event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"90e24cbafd8c59084ad3aa234e814bb76c7cc62e3f4fd2f231f08f478ee21fe0"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.516144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" event={"ID":"02d4609f-f699-4ac2-bc41-752b879681ba","Type":"ContainerStarted","Data":"ee04e324663f8fc4b82cb8b67e9abaf1041eff947957b2c683b5e82e076739c3"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.521802 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.522170 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.022155548 +0000 UTC m=+144.103092862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.531725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" event={"ID":"ccd97956-aef1-45cf-9475-02928c866124","Type":"ContainerStarted","Data":"ff1d2840fe467c400dc900559308f5e595fcd476b4c879a9698aeab0690fa07f"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.532712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" event={"ID":"22c4dfcc-144e-40cd-bed2-dc28c210a130","Type":"ContainerStarted","Data":"89cc21aca7d7ce86585c86f456df154687acf8be7a8390235ac7c35d06f5ef7f"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.534664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerStarted","Data":"ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.535275 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.536917 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.536950 4867 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.538353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"1fbdab536832bc3ffe63dca56aee4c29e70508ec0e812efabd142713405560ce"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.539797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerStarted","Data":"271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.540528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.541886 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.541918 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.547461 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f47sx" podStartSLOduration=120.547446412 podStartE2EDuration="2m0.547446412s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.544751914 +0000 UTC m=+143.625689218" watchObservedRunningTime="2026-02-14 04:11:51.547446412 +0000 UTC m=+143.628383726" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.548894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" event={"ID":"6a8f75ff-3558-4d7b-8adb-722a732d0633","Type":"ContainerStarted","Data":"304bdadfd0110c34cc762c32f0da538f1989fb2efbd08e2faec1ba1b223f466d"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.551162 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" event={"ID":"6d8ea50d-6822-425a-8eac-6311c8537eb7","Type":"ContainerStarted","Data":"53f2d770e25766aa294ce2cd51e6fda4ecaed6b876043de753f867fb66dc79d7"} Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.551891 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.553596 4867 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.553629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.578032 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-ccg6j" podStartSLOduration=120.578017332 podStartE2EDuration="2m0.578017332s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.577742335 +0000 UTC m=+143.658679659" watchObservedRunningTime="2026-02-14 04:11:51.578017332 +0000 UTC m=+143.658954646" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.617775 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podStartSLOduration=120.617758275 podStartE2EDuration="2m0.617758275s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.617657443 +0000 UTC m=+143.698594757" watchObservedRunningTime="2026-02-14 04:11:51.617758275 +0000 UTC m=+143.698695589" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.624150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.625988 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.125976055 +0000 UTC m=+144.206913369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.675115 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podStartSLOduration=120.675094317 podStartE2EDuration="2m0.675094317s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.671407363 +0000 UTC m=+143.752344687" watchObservedRunningTime="2026-02-14 04:11:51.675094317 +0000 UTC m=+143.756031631" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.680395 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rxprp" podStartSLOduration=120.680367451 podStartE2EDuration="2m0.680367451s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.649041843 +0000 UTC m=+143.729979157" watchObservedRunningTime="2026-02-14 04:11:51.680367451 +0000 UTC m=+143.761304825" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.720331 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-886ct" podStartSLOduration=120.72031596 podStartE2EDuration="2m0.72031596s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.701459809 +0000 UTC m=+143.782397123" watchObservedRunningTime="2026-02-14 04:11:51.72031596 +0000 UTC m=+143.801253274" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.721760 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-sz8l8" podStartSLOduration=7.721755327 podStartE2EDuration="7.721755327s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.720217738 +0000 UTC m=+143.801155052" watchObservedRunningTime="2026-02-14 04:11:51.721755327 +0000 UTC m=+143.802692641" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.726095 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.727906 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-14 04:11:52.227884533 +0000 UTC m=+144.308821847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.745284 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podStartSLOduration=121.745266476 podStartE2EDuration="2m1.745266476s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:51.74502355 +0000 UTC m=+143.825960864" watchObservedRunningTime="2026-02-14 04:11:51.745266476 +0000 UTC m=+143.826203790" Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.827427 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.828326 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.328296723 +0000 UTC m=+144.409234047 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:51 crc kubenswrapper[4867]: I0214 04:11:51.928241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:51 crc kubenswrapper[4867]: E0214 04:11:51.928612 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.42858897 +0000 UTC m=+144.509526284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.032169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.032582 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.532566251 +0000 UTC m=+144.613503565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.132955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.133095 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.633071784 +0000 UTC m=+144.714009088 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.133201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.133540 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.633532236 +0000 UTC m=+144.714469550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.234074 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.234427 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.734412768 +0000 UTC m=+144.815350082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.336015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.336538 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.83649212 +0000 UTC m=+144.917429434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.437315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.437692 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:52.93767847 +0000 UTC m=+145.018615784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.539587 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.539985 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.039972248 +0000 UTC m=+145.120909572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.561591 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" event={"ID":"ec0d5c79-9e98-4f09-a336-9c284ba81d82","Type":"ContainerStarted","Data":"900bb8b6bc424bcc0d4213869ba2132ba509a17f54cb6d5786f79ffa8f2ff01a"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.564521 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" event={"ID":"9a16b0f1-4ef6-457a-a766-a0cc2181501f","Type":"ContainerStarted","Data":"e4e60affe86a35fc1b3546c424ffe18fb73433fa54f7e1f2f48230d3938cb514"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.566534 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" event={"ID":"1261994f-a993-4ffc-851a-dfce5bcc10b1","Type":"ContainerStarted","Data":"ad2ddf4680e69f0a913bce1ff89fea465b130eba23650d73e98a805b35546172"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.570894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" event={"ID":"d74f081b-fe53-4642-8340-a8e602c627f1","Type":"ContainerStarted","Data":"8e1c452b54860770b53ac4d26fe606d56a8da1c4532f5ebb807da0e51ca4911a"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.577978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" event={"ID":"d58c6e7c-e0bc-4833-ab34-348c03f75da7","Type":"ContainerStarted","Data":"c8e82d7f6512b2d6b5c03b51ba8a2b0d813ac1588b43a82d35118815f7fec1a7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.581448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.582875 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"6d4453329edd29451bae0a09af90381f5e724a96b41cd88fd8fce385eb3b0938"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.584652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" event={"ID":"a9bcb9a2-1128-4c6b-80b1-47afd1a46511","Type":"ContainerStarted","Data":"1abbdcf648a7bfd0096b2c9b5b18705a13408f9f258027c631990c9d23109908"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.587444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.588180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.590975 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.591020 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.594849 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" event={"ID":"8437deca-adf5-4648-9abe-2c1c6376d07b","Type":"ContainerStarted","Data":"adcf037effe8823e62cc635472c88504b24b940b865e88039d35e39c4e81f334"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.595749 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6tvm5" podStartSLOduration=121.59573005 podStartE2EDuration="2m1.59573005s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.594639742 +0000 UTC m=+144.675577056" watchObservedRunningTime="2026-02-14 04:11:52.59573005 +0000 UTC m=+144.676667364" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.597428 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerStarted","Data":"aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.598705 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerStarted","Data":"51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.599598 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.601142 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.601190 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.602085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.602687 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604451 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604497 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.604772 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" event={"ID":"acdb1323-fec8-46fa-9f36-9b0f7f74cca4","Type":"ContainerStarted","Data":"6a0a494f29ffa335720d7960fce257fdb4789ba5266a571250856c1caa1d4139"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.606165 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.606216 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.607643 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.607673 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.609067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" event={"ID":"0ccfed17-f056-4bbe-8ec3-cdd31f37be63","Type":"ContainerStarted","Data":"c6c39938bfb9f99f937a0fc65d181fea0eb1da601b9f5674b7e62e146b7e19eb"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.619109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"67c1e7d10b3abf8fcc8deed18cda3d4daabcb2d1f501d3cd9da57cd0242ef6c3"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.620474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" event={"ID":"dd4dbaf5-45ee-4171-b6b9-7deba44931ff","Type":"ContainerStarted","Data":"0afe2d8ca5740eb65cbdba4d5b86f18abf64813249d78245a81d0c7fae76c57d"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.621814 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podStartSLOduration=121.621799785 podStartE2EDuration="2m1.621799785s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.618731047 +0000 UTC m=+144.699668361" watchObservedRunningTime="2026-02-14 04:11:52.621799785 +0000 UTC m=+144.702737099" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.625265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" event={"ID":"d1f6fd76-f362-495f-969d-a644f072552f","Type":"ContainerStarted","Data":"82b37a1a0a51ba5be1a38f645454c34b41d59a7c8c5d04f87682e4e4b69cd548"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.625411 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.627304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" event={"ID":"1fd832b4-de40-4266-93fb-3682eeb9dd3e","Type":"ContainerStarted","Data":"3bb257dbc0b7e413b76e942b1666e5f7fbceaca7b423608496b33ebb41a122d7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.632535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8ftf5" event={"ID":"d3658855-0c06-490f-9bcc-33de7069178e","Type":"ContainerStarted","Data":"31ebad694423c3f7c2ca5e7854062b07fbed0bf71eb51ec69427bf63965f12f7"} Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635383 4867 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635416 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635468 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635532 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635674 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.635845 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.643356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.644007 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.14398836 +0000 UTC m=+145.224925674 (durationBeforeRetry 500ms). 
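Each of the readiness failures above (olm-operator, marketplace-operator, packageserver, catalog-operator, route-controller-manager, console-operator, oauth-openshift) arrives moments after the corresponding ContainerStarted event, and every one reports "connect: connection refused": the process is up but has not yet bound its serving port, so these failures normally clear within a few probe periods. A minimal sketch of the same kind of HTTP readiness check, using the marketplace-operator endpoint from the log; this is an illustration, not the kubelet's own prober code:

```go
// readycheck.go: issue a single HTTP readiness probe the way the kubelet's
// prober does conceptually: GET the endpoint, treat any transport error or
// non-2xx status as a failed probe.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

func main() {
	// Endpoint taken from the marketplace-operator probe lines above.
	const url = "http://10.217.0.37:8080/healthz"

	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		// While the container is still binding its port, this is exactly the
		// "connect: connection refused" failure the prober logs.
		fmt.Printf("probe failed: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		fmt.Printf("probe failed: status %d\n", resp.StatusCode)
		os.Exit(1)
	}
	fmt.Println("probe succeeded")
}
```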
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.644528 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9kgzh" podStartSLOduration=121.644517754 podStartE2EDuration="2m1.644517754s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.642879512 +0000 UTC m=+144.723816836" watchObservedRunningTime="2026-02-14 04:11:52.644517754 +0000 UTC m=+144.725455068" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.676601 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-l6gq7" podStartSLOduration=121.676582241 podStartE2EDuration="2m1.676582241s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.676203842 +0000 UTC m=+144.757141156" watchObservedRunningTime="2026-02-14 04:11:52.676582241 +0000 UTC m=+144.757519555" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.713726 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podStartSLOduration=121.713706978 podStartE2EDuration="2m1.713706978s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.705303614 +0000 UTC m=+144.786240938" watchObservedRunningTime="2026-02-14 04:11:52.713706978 +0000 UTC m=+144.794644302" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.731925 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5kv6p" podStartSLOduration=122.731907122 podStartE2EDuration="2m2.731907122s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.731304697 +0000 UTC m=+144.812242011" watchObservedRunningTime="2026-02-14 04:11:52.731907122 +0000 UTC m=+144.812844436" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.750556 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qlkzp" podStartSLOduration=121.750540177 podStartE2EDuration="2m1.750540177s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.75026902 +0000 UTC m=+144.831206334" watchObservedRunningTime="2026-02-14 04:11:52.750540177 +0000 UTC m=+144.831477491" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.750926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.762006 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.261989169 +0000 UTC m=+145.342926543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.820135 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-5k4wz" podStartSLOduration=121.820121031 podStartE2EDuration="2m1.820121031s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.789367287 +0000 UTC m=+144.870304601" watchObservedRunningTime="2026-02-14 04:11:52.820121031 +0000 UTC m=+144.901058335" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.849873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podStartSLOduration=122.849851739 podStartE2EDuration="2m2.849851739s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.847863099 +0000 UTC m=+144.928800413" watchObservedRunningTime="2026-02-14 04:11:52.849851739 +0000 UTC m=+144.930789053" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.850494 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podStartSLOduration=121.850487926 podStartE2EDuration="2m1.850487926s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.821086505 +0000 UTC m=+144.902023819" watchObservedRunningTime="2026-02-14 04:11:52.850487926 +0000 UTC m=+144.931425230" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.870286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.870997 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.370957647 +0000 UTC m=+145.451894961 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.890667 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-t8bst" podStartSLOduration=121.890646699 podStartE2EDuration="2m1.890646699s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.888988127 +0000 UTC m=+144.969925441" watchObservedRunningTime="2026-02-14 04:11:52.890646699 +0000 UTC m=+144.971584013" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.921008 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-485km" podStartSLOduration=121.920994693 podStartE2EDuration="2m1.920994693s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.919999188 +0000 UTC m=+145.000936502" watchObservedRunningTime="2026-02-14 04:11:52.920994693 +0000 UTC m=+145.001932007" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.953682 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pmlgc" podStartSLOduration=122.953663826 podStartE2EDuration="2m2.953663826s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.952173018 +0000 UTC m=+145.033110332" watchObservedRunningTime="2026-02-14 04:11:52.953663826 +0000 UTC m=+145.034601140" Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.975198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:52 crc kubenswrapper[4867]: E0214 04:11:52.975601 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.475580025 +0000 UTC m=+145.556517339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:52 crc kubenswrapper[4867]: I0214 04:11:52.979850 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-szcmx" podStartSLOduration=121.979836133 podStartE2EDuration="2m1.979836133s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:52.978023607 +0000 UTC m=+145.058960931" watchObservedRunningTime="2026-02-14 04:11:52.979836133 +0000 UTC m=+145.060773437" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.016961 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podStartSLOduration=122.01694575 podStartE2EDuration="2m2.01694575s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.015193035 +0000 UTC m=+145.096130349" watchObservedRunningTime="2026-02-14 04:11:53.01694575 +0000 UTC m=+145.097883054" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.038839 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-wcdc2" podStartSLOduration=122.038821977 podStartE2EDuration="2m2.038821977s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.037585206 +0000 UTC m=+145.118522520" watchObservedRunningTime="2026-02-14 04:11:53.038821977 +0000 UTC m=+145.119759291" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.075876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.076136 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.576121459 +0000 UTC m=+145.657058773 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.103835 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.116850 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.116901 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.152807 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-699tj" podStartSLOduration=122.152790193 podStartE2EDuration="2m2.152790193s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.152181078 +0000 UTC m=+145.233118392" watchObservedRunningTime="2026-02-14 04:11:53.152790193 +0000 UTC m=+145.233727507" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.153599 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" podStartSLOduration=123.153592804 podStartE2EDuration="2m3.153592804s" podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.093015009 +0000 UTC m=+145.173952323" watchObservedRunningTime="2026-02-14 04:11:53.153592804 +0000 UTC m=+145.234530118" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.177107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.177450 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.677435221 +0000 UTC m=+145.758372535 (durationBeforeRetry 500ms). 
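The MountVolume.MountDevice and UnmountVolume.TearDown failures above share one root cause: the kubelet resolves the driver name against its list of CSI plugins that have completed node registration, and kubevirt.io.hostpath-provisioner is not (yet) in that list, so both the teardown for the old pod (8f668bae-…) and the mount for the new image-registry pod fail fast and get requeued. One way to inspect what is currently registered, from the API side, is the node's CSINode object. A minimal client-go sketch, assuming a reachable kubeconfig (path is illustrative) and the node name crc from the log prefix:

```go
// csidrivers.go: list the CSI drivers registered on a node by reading its
// CSINode object -- the same registration list the kubelet consults when it
// reports "driver name ... not found in the list of registered CSI drivers".
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // adjust
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// "crc" is the single node of this cluster, per the hostname in the log.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if len(csiNode.Spec.Drivers) == 0 {
		fmt.Println("no CSI drivers registered on this node")
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Printf("registered driver: %s (node ID %s)\n", d.Name, d.NodeID)
	}
}
```

Once the hostpath-provisioner driver appears in that list, both the pending unmount and the pending mount should succeed on their next retry.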
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.179546 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podStartSLOduration=122.179533415 podStartE2EDuration="2m2.179533415s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.179086774 +0000 UTC m=+145.260024088" watchObservedRunningTime="2026-02-14 04:11:53.179533415 +0000 UTC m=+145.260470729" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.257326 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8ftf5" podStartSLOduration=9.257309448 podStartE2EDuration="9.257309448s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.218921159 +0000 UTC m=+145.299858473" watchObservedRunningTime="2026-02-14 04:11:53.257309448 +0000 UTC m=+145.338246762" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.257710 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t6c97" podStartSLOduration=122.257706508 podStartE2EDuration="2m2.257706508s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.256788765 +0000 UTC m=+145.337726079" watchObservedRunningTime="2026-02-14 04:11:53.257706508 +0000 UTC m=+145.338643822" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.278489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.278642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.778612521 +0000 UTC m=+145.859549845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.278898 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.279334 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.779315479 +0000 UTC m=+145.860252853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.379617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.379820 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.87979581 +0000 UTC m=+145.960733124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.380065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.380335 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.880324023 +0000 UTC m=+145.961261337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.481138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.481454 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:53.981429621 +0000 UTC m=+146.062366935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.582781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.583074 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.083063302 +0000 UTC m=+146.164000616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.639453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gc8sl" event={"ID":"541a6523-92f6-477b-9d35-a3a0074f5de3","Type":"ContainerStarted","Data":"6bbbbeedff53f1e49ea9cd3f79ae63d75d6bc0433fb4e9f819daa726730735e0"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.639669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.641414 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" event={"ID":"894233bb-65ed-4cdd-ac61-7a8bd8f66140","Type":"ContainerStarted","Data":"4823dc08f4332c870bc0a784be9acf6b08614d27f9fcc58f84d0a6d513455976"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.643773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.643809 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656349 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656400 4867 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656416 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656419 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656466 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656470 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656524 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656485 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656376 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656573 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656574 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" 
podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.656473 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.676686 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gc8sl" podStartSLOduration=9.676668029 podStartE2EDuration="9.676668029s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.676135965 +0000 UTC m=+145.757073279" watchObservedRunningTime="2026-02-14 04:11:53.676668029 +0000 UTC m=+145.757605343" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.688774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.689149 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.189136747 +0000 UTC m=+146.270074061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.702424 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podStartSLOduration=122.702404145 podStartE2EDuration="2m2.702404145s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.699796419 +0000 UTC m=+145.780733733" watchObservedRunningTime="2026-02-14 04:11:53.702404145 +0000 UTC m=+145.783341449" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.734810 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" podStartSLOduration=122.734790171 podStartE2EDuration="2m2.734790171s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:53.732613165 +0000 UTC m=+145.813550479" watchObservedRunningTime="2026-02-14 04:11:53.734790171 +0000 UTC m=+145.815727485" Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.791074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.798192 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.298176337 +0000 UTC m=+146.379113741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.897012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.897115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.397099379 +0000 UTC m=+146.478036693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.897362 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.897639 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.397632813 +0000 UTC m=+146.478570127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.999309 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:53 crc kubenswrapper[4867]: E0214 04:11:53.999558 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.499530131 +0000 UTC m=+146.580467455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:53 crc kubenswrapper[4867]: I0214 04:11:53.999703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.000182 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.500172057 +0000 UTC m=+146.581109381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.101372 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.101756 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.601738147 +0000 UTC m=+146.682675471 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.104143 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.104196 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.202844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.203293 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.703272196 +0000 UTC m=+146.784209580 (durationBeforeRetry 500ms). 
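The router-default pod is in a different situation from the readiness failures earlier: it is failing its startup probe against localhost:1936, and until a startup probe first succeeds the kubelet withholds the container's readiness and liveness probes entirely, so the pod stays unready. What the probe is effectively polling for is that port accepting connections; a small stand-in sketch (address, interval, and deadline are illustrative values, not the probe's configured thresholds):

```go
// waitport.go: poll a TCP endpoint until it accepts connections, mirroring
// what the router's startup probe on localhost:1936 is waiting for.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "localhost:1936"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("port is accepting connections; startup probe would pass")
			return
		}
		// "connect: connection refused" while the router is still starting.
		fmt.Printf("still down: %v\n", err)
		time.Sleep(time.Second)
	}
	fmt.Println("gave up waiting")
}
```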
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.307891 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.308037 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.808019946 +0000 UTC m=+146.888957260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.308189 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.308394 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.808387175 +0000 UTC m=+146.889324489 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.409262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.409380 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.90935496 +0000 UTC m=+146.990292274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.409800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.410115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:54.910103669 +0000 UTC m=+146.991040973 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.511192 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.511557 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.011540345 +0000 UTC m=+147.092477659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.612611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.612945 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.11293399 +0000 UTC m=+147.193871294 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654495 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mkw9h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body=
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654793 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654827 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.654954 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused"
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.655200 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body=
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.655259 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused"
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.713361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.713573 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.213555336 +0000 UTC m=+147.294492650 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.713675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.713999 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.213986287 +0000 UTC m=+147.294923601 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.815917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.816081 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.316050069 +0000 UTC m=+147.396987383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.816298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.817224 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.317208828 +0000 UTC m=+147.398146142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:54 crc kubenswrapper[4867]: I0214 04:11:54.920070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:54 crc kubenswrapper[4867]: E0214 04:11:54.920880 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.420865651 +0000 UTC m=+147.501802955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.022444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.022737 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.522726449 +0000 UTC m=+147.603663763 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.115324 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:11:55 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:11:55 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:11:55 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.115375 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.123161 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.123352 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.623325993 +0000 UTC m=+147.704263317 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.123494 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.123773 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.623765615 +0000 UTC m=+147.704702929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.224464 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.224583 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.724562644 +0000 UTC m=+147.805499968 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.224817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.225082 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.725073198 +0000 UTC m=+147.806010512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.325940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.326144 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.826118764 +0000 UTC m=+147.907056078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.326288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.326580 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.826569465 +0000 UTC m=+147.907506779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.426873 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.427205 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:55.927191521 +0000 UTC m=+148.008128835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.528558 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.528904 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.028888954 +0000 UTC m=+148.109826268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.629374 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.629500 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.129476598 +0000 UTC m=+148.210413902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.629667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.629948 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.12993619 +0000 UTC m=+148.210873504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.663005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"f0ce6046d0ab83b94ac4d4ae21e0e2aee7d12dc8629bf47e4f4767c2b9df51ab"}
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.730740 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.730963 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.230938115 +0000 UTC m=+148.311875419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.731162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.731484 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.231472769 +0000 UTC m=+148.312410083 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.831828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.832239 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.332224118 +0000 UTC m=+148.413161432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.847653 4867 csr.go:261] certificate signing request csr-d6v2v is approved, waiting to be issued
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.854130 4867 csr.go:257] certificate signing request csr-d6v2v is issued
Feb 14 04:11:55 crc kubenswrapper[4867]: I0214 04:11:55.933883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:55 crc kubenswrapper[4867]: E0214 04:11:55.934168 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.434158067 +0000 UTC m=+148.515095381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.035461 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.035676 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.535640214 +0000 UTC m=+148.616577528 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.035958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.036276 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.53626159 +0000 UTC m=+148.617198904 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.108862 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:11:56 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:11:56 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:11:56 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.108918 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.137707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.137893 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.637869061 +0000 UTC m=+148.718806375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.138087 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.138420 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.638411804 +0000 UTC m=+148.719349118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.243167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.243317 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.743288148 +0000 UTC m=+148.824225482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.243469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.243819 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.743809262 +0000 UTC m=+148.824746656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.344105 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.344455 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.844441238 +0000 UTC m=+148.925378552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.446120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.446396 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:56.946385627 +0000 UTC m=+149.027322941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.546617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.547012 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.046998852 +0000 UTC m=+149.127936166 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.648671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.648998 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.148987542 +0000 UTC m=+149.229924856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.750185 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.750608 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.250593123 +0000 UTC m=+149.331530437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.852195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.852586 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.352570823 +0000 UTC m=+149.433508137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.855445 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-14 04:06:55 +0000 UTC, rotation deadline is 2027-01-02 23:44:14.270951759 +0000 UTC
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.855466 4867 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7747h32m17.415488442s for next certificate rotation
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.908550 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.908597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919464 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919535 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919543 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.919560 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.923208 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b"
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.953394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.953559 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.453541396 +0000 UTC m=+149.534478710 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:56 crc kubenswrapper[4867]: I0214 04:11:56.953643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:56 crc kubenswrapper[4867]: E0214 04:11:56.953874 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.453866495 +0000 UTC m=+149.534803809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.054572 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.054775 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.554745287 +0000 UTC m=+149.635682591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.055108 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.055384 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.555376583 +0000 UTC m=+149.636313897 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.087785 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.088627 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.093157 4867 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8qkg2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.093200 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" podUID="894233bb-65ed-4cdd-ac61-7a8bd8f66140" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.095976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.110809 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:11:57 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:11:57 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:11:57 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.110857 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.155580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.156969 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.157066 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.657053575 +0000 UTC m=+149.737990889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.158126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.158728 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.658720598 +0000 UTC m=+149.739657912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.259112 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.259279 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.759260551 +0000 UTC m=+149.840197865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.259417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.260747 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.760729839 +0000 UTC m=+149.841667153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.283640 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.284227 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.298155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.298370 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.307312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.310736 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.311575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c4c52"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.318821 4867 patch_prober.go:28] interesting pod/console-f9d7485db-c4c52 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.318857 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361032 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.361325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.361425 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.861410686 +0000 UTC m=+149.942348000 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.462173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.462456 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:57.962438791 +0000 UTC m=+150.043376105 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.463166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.504329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.557160 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.563738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.563922 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.063887708 +0000 UTC m=+150.144825022 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.564015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.564281 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.064273808 +0000 UTC m=+150.145211122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.632990 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.662275 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.664891 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.665257 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.165238102 +0000 UTC m=+150.246175416 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.687459 4867 generic.go:334] "Generic (PLEG): container finished" podID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerID="aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6" exitCode=0 Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.687491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerDied","Data":"aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.738782 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager" containerID="cri-o://b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e" gracePeriod=30 Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.738987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"b67964cbe053fa4b504891f9d1320fbbf85de3580e88f6025eb397bf3a820c3e"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.739044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"30a4d1a2b9a2f97ee6204f6ea64d14f2970f5d990b25c81ad1207f0552e02227"} Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.751587 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.770117 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.771061 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.27104996 +0000 UTC m=+150.351987274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.773998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.774924 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.779200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.805377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.840848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.863997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.880831 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.881817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.882776 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.382744398 +0000 UTC m=+150.463681712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986566 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986853 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: I0214 04:11:57.986960 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:57 crc kubenswrapper[4867]: E0214 04:11:57.987856 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.487842707 +0000 UTC m=+150.568780021 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.008270 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.009431 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.016879 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.028532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.033263 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.059816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.062231 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.088301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.088486 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.588467283 +0000 UTC m=+150.669404597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.101930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.106607 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.124818 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:11:58 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:11:58 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:11:58 crc kubenswrapper[4867]: healthz check failed Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.124863 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.149252 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.179820 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"certified-operators-5mz22\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.180945 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197629 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.197672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.200348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.201203 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.209761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.216214 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.71619711 +0000 UTC m=+150.797134414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.238605 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.248635 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"community-operators-8vs6k\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.259671 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.260354 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.263383 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.263910 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.273922 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298601 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.298695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " 
pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.298789 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.798774925 +0000 UTC m=+150.879712239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.343370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.372652 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.374282 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.386835 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.391597 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400603 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400719 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.400738 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.401376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.401629 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:58.901618747 +0000 UTC m=+150.982556061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.401846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.441390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"certified-operators-x4khs\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.505881 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.505968 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506303 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.506438 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.506561 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.006545233 +0000 UTC m=+151.087482547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.550935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.597125 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.607295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.607339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608548 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.608626 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.108615445 +0000 UTC m=+151.189552749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.608836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.632837 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"community-operators-2cjxf\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.714097 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.714525 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.214489954 +0000 UTC m=+151.295427268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.717403 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.734272 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf"
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.755077 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"9ac639b6394c5e1017aeaf569eada5d729a39bf526b8497bd4296ca3b0755153"}
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.764991 4867 generic.go:334] "Generic (PLEG): container finished" podID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerID="b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e" exitCode=0
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.765082 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerDied","Data":"b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e"}
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.783635 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerStarted","Data":"377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4"}
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.816184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.816535 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.316521556 +0000 UTC m=+151.397458870 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.836567 4867 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.916892 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.917084 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.417062369 +0000 UTC m=+151.497999683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.917666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:58 crc kubenswrapper[4867]: E0214 04:11:58.918721 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.418712211 +0000 UTC m=+151.499649575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.919892 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"]
Feb 14 04:11:58 crc kubenswrapper[4867]: I0214 04:11:58.962122 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"]
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024715 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.024971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.025027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.025093 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.525064193 +0000 UTC m=+151.606001507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.030611 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.034125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.067199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.075871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.106126 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:11:59 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:11:59 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:11:59 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.106165 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.125961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.126251 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-14 04:11:59.626239772 +0000 UTC m=+151.707177086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-5rxcg" (UID: "c029599e-5014-4874-917f-076635849451") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.171385 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.198462 4867 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-14T04:11:58.836615758Z","Handler":null,"Name":""}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.207500 4867 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.207551 4867 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 14 04:11:59 crc kubenswrapper[4867]: W0214 04:11:59.211211 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5be31bdb_ced4_4935_8102_e6ddc671474f.slice/crio-9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9 WatchSource:0}: Error finding container 9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9: Status 404 returned error can't find the container with id 9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.226300 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.260861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.262593 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.267079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.273828 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.329591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.332264 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.332289 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.333604 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.432957 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") "
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433257 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") "
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") pod \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\" (UID: \"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a\") "
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.433783 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume" (OuterVolumeSpecName: "config-volume") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.444607 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.462479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8" (OuterVolumeSpecName: "kube-api-access-s6dn8") pod "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" (UID: "71ac31c5-7a3b-4c18-aa9e-c193fa8f778a"). InnerVolumeSpecName "kube-api-access-s6dn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.467261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-5rxcg\" (UID: \"c029599e-5014-4874-917f-076635849451\") " pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.522099 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"]
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537354 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537401 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.537415 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6dn8\" (UniqueName: \"kubernetes.io/projected/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a-kube-api-access-s6dn8\") on node \"crc\" DevicePath \"\""
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.577207 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770051 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:11:59 crc kubenswrapper[4867]: E0214 04:11:59.770396 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770407 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.770496 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" containerName="collect-profiles"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.771350 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.774684 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.790784 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.801155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" event={"ID":"7cedc5a6-929b-43ca-a8b0-6dca555ca455","Type":"ContainerStarted","Data":"dd9f424d26487bd816b5e8b2553faae6b604eacd9336a79c5c1317a6caa66f61"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.803642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerStarted","Data":"a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.804649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerStarted","Data":"9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.806046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.806842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"23ddca82e7ec32caacf54a7cebc1ffb43fed1e460daeba077f08fce659c5713c"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd" event={"ID":"71ac31c5-7a3b-4c18-aa9e-c193fa8f778a","Type":"ContainerDied","Data":"e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808408 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4ca5c9cce4b1a413dbb012e458367afc39bde8f3194baa1bce21c05bfa3d89d"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.808407 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.809740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"3e5452fa8e8c6fb391a2e17ab4b7c984074e14d79a0538110dcd9e41b18bd839"}
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.817434 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pzj5s" podStartSLOduration=15.817419425 podStartE2EDuration="15.817419425s" podCreationTimestamp="2026-02-14 04:11:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:11:59.81722956 +0000 UTC m=+151.898166874" watchObservedRunningTime="2026-02-14 04:11:59.817419425 +0000 UTC m=+151.898356739"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945010 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:11:59 crc kubenswrapper[4867]: I0214 04:11:59.945152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: W0214 04:12:00.025015 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b WatchSource:0}: Error finding container dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b: Status 404 returned error can't find the container with id dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b
Feb 14 04:12:00 crc kubenswrapper[4867]: W0214 04:12:00.026086 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b WatchSource:0}: Error finding container 64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b: Status 404 returned error can't find the container with id 64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046697 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.046884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.068380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"redhat-marketplace-gvh7q\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.102706 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.107303 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:12:00 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:12:00 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:12:00 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.107429 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.176538 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.183843 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.187150 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.273217 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351584 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351678 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.351731 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.454531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.455269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.455295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.485039 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"redhat-marketplace-s8hwg\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.687460 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.690551 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.741855 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.829915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerStarted","Data":"93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.833325 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d" exitCode=0
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.833670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.842939 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.847982 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.847963479 podStartE2EDuration="2.847963479s" podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:00.847380434 +0000 UTC m=+152.928317748" watchObservedRunningTime="2026-02-14 04:12:00.847963479 +0000 UTC m=+152.928900793"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.848088 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e" exitCode=0
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.848154 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") "
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") "
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") "
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") "
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.859828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") pod \"07dd9173-fdfe-4edb-821b-37c94116b53e\" (UID: \"07dd9173-fdfe-4edb-821b-37c94116b53e\") "
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.861578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca" (OuterVolumeSpecName: "client-ca") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.862689 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.870273 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config" (OuterVolumeSpecName: "config") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.874696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerStarted","Data":"6ea0765f93238181496aa9ad98328dd359db53721f5f5fd14d5d2d61c6d3b39b"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.881110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.881166 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"add894549a2aff626db3cd5482bf5486b20d694394b5286fe468f9059e3f4b1d"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883289 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"64386fcb522683f74940827e11a242c7aa41fcd9600688bca68515d34901637b"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7" (OuterVolumeSpecName: "kube-api-access-bqcq7") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "kube-api-access-bqcq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.883868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07dd9173-fdfe-4edb-821b-37c94116b53e" (UID: "07dd9173-fdfe-4edb-821b-37c94116b53e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.898869 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"90d63cc6554a718e0d4cbfb1e7b6d2e1fdaca86fdf3238edfbe5d97515589316"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.908821 4867 generic.go:334] "Generic (PLEG): container finished" podID="adff5c07-e04d-4412-9e26-a0d00b565646" containerID="a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65" exitCode=0
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.908921 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerDied","Data":"a215a1216cda74b0dbd2e2da4a16be436346ba36074b62928e5d1ff7177aee65"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.912095 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa" exitCode=0
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.912151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.914820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d4a2627f95fd3c188ed05c0d5e7f958011284b03877cacfc4dda17d1cf310d54"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8" event={"ID":"07dd9173-fdfe-4edb-821b-37c94116b53e","Type":"ContainerDied","Data":"c43a26497795da97ad6a6c4586b62e12ae1ccaaa8dd33d4cfe17199345411003"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916409 4867 scope.go:117] "RemoveContainer" containerID="b5e5c1b68f534cc73bf83368aec1b5b6ddd64d982817b6a68fb05176cffabc6e"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.916463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-pctg8"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.924629 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dd9c8ef630798ffa3cc39d45ab72fceea03622579e03ea47273afd679887d81b"}
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965177 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965213 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965227 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07dd9173-fdfe-4edb-821b-37c94116b53e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965240 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqcq7\" (UniqueName: \"kubernetes.io/projected/07dd9173-fdfe-4edb-821b-37c94116b53e-kube-api-access-bqcq7\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.965254 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07dd9173-fdfe-4edb-821b-37c94116b53e-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980336 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:12:00 crc kubenswrapper[4867]: E0214 04:12:00.980560 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980571 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.980768 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" containerName="controller-manager"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.981815 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.985357 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.986313 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.990546 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:12:00 crc kubenswrapper[4867]: I0214 04:12:00.994540 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-pctg8"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.005978 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07dd9173-fdfe-4edb-821b-37c94116b53e" path="/var/lib/kubelet/pods/07dd9173-fdfe-4edb-821b-37c94116b53e/volumes"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.006636 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.048096 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.105859 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 14 04:12:01 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld
Feb 14 04:12:01 crc kubenswrapper[4867]: [+]process-running ok
Feb 14 04:12:01 crc kubenswrapper[4867]: healthz check failed
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.106097 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170551 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170599 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.170632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.251400 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.251458 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271631 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.271840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.272281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.272397 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.301457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"redhat-operators-n9vq9\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.305735 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.369688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.370676 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.379181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.474723 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.475131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.475166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577384 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.577414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.578138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.578206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.595093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"redhat-operators-jc878\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.647771 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:12:01 crc kubenswrapper[4867]: W0214 04:12:01.659305 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ce8d91_a436_4fe6_b5fd_1988e588ded8.slice/crio-4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503 WatchSource:0}: Error finding container 4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503: Status 404 returned error can't find the container with id 4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.790497 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.791686 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.793568 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.794909 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795210 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795253 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795273 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.795633 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.799153 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"]
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.802886 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.817720 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880809 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880836 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.880920 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.939270 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1d273146d82792583f5426f64d40ca1b61c93f2ff6a5501b7da9405e4007554e"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.940682 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerStarted","Data":"4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.943045 4867 generic.go:334] "Generic (PLEG): container finished" podID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerID="93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf" exitCode=0
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.943141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerDied","Data":"93e8b98a2ad31b4fa7402ae583c45be6e8f302edddc3396101d8d5532f77e5bf"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.944494 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f" exitCode=0
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.944577 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.946204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"68b3b8905f1bedcc835a898b667bf7ab79f6fa8df4b53b86e14c3fef1d2938f6"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.946441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.947602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerStarted","Data":"984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.947737 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.949100 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerID="7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff" exitCode=0
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.949164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.953566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6b246566537b41c130bb12c4f84dc51f22f10bd6f92a37c1c392801346072b07"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.958210 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613" exitCode=0
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.958287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.963812 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"9414f47d96386d3ff0af0fa0050f52950e5a9a8e484274e0b79dd8bd6d0a669b"}
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981669 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.981758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.982947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.984310 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.984322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn"
Feb 14 04:12:01 crc kubenswrapper[4867]: I0214 04:12:01.994266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") "
pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.019037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"controller-manager-879f6c89f-nt7fn\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.050804 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" podStartSLOduration=131.050781596 podStartE2EDuration="2m11.050781596s" podCreationTimestamp="2026-02-14 04:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:02.047073732 +0000 UTC m=+154.128011046" watchObservedRunningTime="2026-02-14 04:12:02.050781596 +0000 UTC m=+154.131718910" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.110819 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.118967 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.122231 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:02 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:02 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:02 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.122441 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.155684 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8qkg2" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.443167 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.501524 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.574024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.599341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") pod \"adff5c07-e04d-4412-9e26-a0d00b565646\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600272 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") pod \"adff5c07-e04d-4412-9e26-a0d00b565646\" (UID: \"adff5c07-e04d-4412-9e26-a0d00b565646\") " Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "adff5c07-e04d-4412-9e26-a0d00b565646" (UID: "adff5c07-e04d-4412-9e26-a0d00b565646"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.600587 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/adff5c07-e04d-4412-9e26-a0d00b565646-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.607203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "adff5c07-e04d-4412-9e26-a0d00b565646" (UID: "adff5c07-e04d-4412-9e26-a0d00b565646"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.704864 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/adff5c07-e04d-4412-9e26-a0d00b565646-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.971625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerStarted","Data":"9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.972071 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.972085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerStarted","Data":"5207b73aaa57eb157e090896dbc459c86fd8684eae6a2b10610ff75ec8af8595"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974425 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974425 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"adff5c07-e04d-4412-9e26-a0d00b565646","Type":"ContainerDied","Data":"377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.974551 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377e295c3b007785a985a19cb9652f29604083f015986a2b6609275e06c00eb4" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.976217 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.976253 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979082 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864" exitCode=0 Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.979179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"873ab4fab8bcde5b4877631fe5b476f986fe024be500dd128844b9b8ff975f35"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.982822 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3" exitCode=0 Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.985277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3"} Feb 14 04:12:02 crc kubenswrapper[4867]: I0214 04:12:02.998907 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podStartSLOduration=5.998887249 podStartE2EDuration="5.998887249s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:12:02.98716389 +0000 UTC m=+155.068101204" watchObservedRunningTime="2026-02-14 04:12:02.998887249 +0000 UTC m=+155.079824563" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.105909 4867 patch_prober.go:28] interesting 
pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:03 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:03 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:03 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.105989 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.129292 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gc8sl" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.473169 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.524941 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") pod \"5be31bdb-ced4-4935-8102-e6ddc671474f\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.525023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") pod \"5be31bdb-ced4-4935-8102-e6ddc671474f\" (UID: \"5be31bdb-ced4-4935-8102-e6ddc671474f\") " Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.525272 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5be31bdb-ced4-4935-8102-e6ddc671474f" (UID: "5be31bdb-ced4-4935-8102-e6ddc671474f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.541798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5be31bdb-ced4-4935-8102-e6ddc671474f" (UID: "5be31bdb-ced4-4935-8102-e6ddc671474f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.626465 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5be31bdb-ced4-4935-8102-e6ddc671474f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:03 crc kubenswrapper[4867]: I0214 04:12:03.626496 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5be31bdb-ced4-4935-8102-e6ddc671474f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035813 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"5be31bdb-ced4-4935-8102-e6ddc671474f","Type":"ContainerDied","Data":"9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9"} Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035859 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.035860 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a5067fa21df88aec15309e79d7720348fa24ff022d24e723cd4073f519393f9" Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.042053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.110953 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 04:12:04 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 04:12:04 crc kubenswrapper[4867]: [+]process-running ok Feb 14 04:12:04 crc kubenswrapper[4867]: healthz check failed Feb 14 04:12:04 crc kubenswrapper[4867]: I0214 04:12:04.111010 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 04:12:05 crc kubenswrapper[4867]: I0214 04:12:05.108521 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:12:05 crc kubenswrapper[4867]: I0214 04:12:05.113177 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.919973 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.920029 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.919972 4867 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:06 crc kubenswrapper[4867]: I0214 04:12:06.920130 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:07 crc kubenswrapper[4867]: I0214 04:12:07.312607 4867 patch_prober.go:28] interesting pod/console-f9d7485db-c4c52 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 14 04:12:07 crc kubenswrapper[4867]: I0214 04:12:07.313003 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.384124 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.401349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7206174b-645b-4924-8345-d1d4b1a5ec39-metrics-certs\") pod \"network-metrics-daemon-4b6k5\" (UID: \"7206174b-645b-4924-8345-d1d4b1a5ec39\") " pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:12:13 crc kubenswrapper[4867]: I0214 04:12:13.613167 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4b6k5" Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.919788 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920089 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920136 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920749 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"} pod="openshift-console/downloads-7954f5f757-x9sjv" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920833 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" containerID="cri-o://6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0" gracePeriod=2 Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.919974 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.920938 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.921206 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:16 crc kubenswrapper[4867]: I0214 04:12:16.921271 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.027054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.027356 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" 
podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" containerID="cri-o://9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" gracePeriod=30 Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.052792 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.053018 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" containerID="cri-o://ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad" gracePeriod=30 Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.316628 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.321616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.829274 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:12:17 crc kubenswrapper[4867]: I0214 04:12:17.829341 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.251431 4867 generic.go:334] "Generic (PLEG): container finished" podID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerID="ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad" exitCode=0 Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.251484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerDied","Data":"ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad"} Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.252918 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerID="9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" exitCode=0 Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.252964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerDied","Data":"9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8"} Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.254353 4867 generic.go:334] "Generic (PLEG): container finished" podID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerID="6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0" exitCode=0 Feb 14 04:12:18 crc kubenswrapper[4867]: I0214 04:12:18.254371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerDied","Data":"6df86e37892d6555081dceb55f2b33fa3d058e82a95ff8722c4d3a8bd1c5bcb0"} Feb 14 04:12:19 crc kubenswrapper[4867]: I0214 04:12:19.585667 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:12:22 crc kubenswrapper[4867]: I0214 04:12:22.113119 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Feb 14 04:12:22 crc kubenswrapper[4867]: I0214 04:12:22.113449 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Feb 14 04:12:26 crc kubenswrapper[4867]: I0214 04:12:26.921121 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:26 crc kubenswrapper[4867]: I0214 04:12:26.921181 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:27 crc kubenswrapper[4867]: I0214 04:12:27.827556 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 14 04:12:27 crc kubenswrapper[4867]: I0214 04:12:27.827825 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 14 04:12:28 crc kubenswrapper[4867]: I0214 04:12:28.063454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 04:12:31 crc kubenswrapper[4867]: I0214 04:12:31.250948 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:12:31 crc kubenswrapper[4867]: I0214 04:12:31.251547 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:12:33 crc kubenswrapper[4867]: I0214 04:12:33.113550 4867 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nt7fn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" start-of-body= Feb 14 04:12:33 crc kubenswrapper[4867]: I0214 04:12:33.113624 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: i/o timeout" Feb 14 04:12:36 crc kubenswrapper[4867]: I0214 04:12:36.920023 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:36 crc kubenswrapper[4867]: I0214 04:12:36.920428 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206149 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 04:12:37 crc kubenswrapper[4867]: E0214 04:12:37.206430 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206452 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: E0214 04:12:37.206467 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206475 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206614 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="adff5c07-e04d-4412-9e26-a0d00b565646" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.206633 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be31bdb-ced4-4935-8102-e6ddc671474f" containerName="pruner" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.207116 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.208801 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.209747 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.219032 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.320917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.321084 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.422259 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.460432 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:37 crc kubenswrapper[4867]: I0214 04:12:37.534497 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.108909 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.114663 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142424 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:12:38 crc kubenswrapper[4867]: E0214 04:12:38.142746 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142762 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: E0214 04:12:38.142773 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142783 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142955 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.142976 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" containerName="controller-manager" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.143680 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.208107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242817 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242899 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.242984 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") pod \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\" (UID: \"14efaf39-985f-45ea-ab79-0b8b2044c7f7\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") pod \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\" (UID: \"dd1a4559-f0ef-4bc6-b318-2c91b798b76d\") " Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243441 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.243971 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca" (OuterVolumeSpecName: "client-ca") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config" (OuterVolumeSpecName: "config") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244186 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca" (OuterVolumeSpecName: "client-ca") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244311 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.244368 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config" (OuterVolumeSpecName: "config") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.248438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.248740 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.249189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j" (OuterVolumeSpecName: "kube-api-access-vcj7j") pod "dd1a4559-f0ef-4bc6-b318-2c91b798b76d" (UID: "dd1a4559-f0ef-4bc6-b318-2c91b798b76d"). InnerVolumeSpecName "kube-api-access-vcj7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.250295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6" (OuterVolumeSpecName: "kube-api-access-q2kd6") pod "14efaf39-985f-45ea-ab79-0b8b2044c7f7" (UID: "14efaf39-985f-45ea-ab79-0b8b2044c7f7"). InnerVolumeSpecName "kube-api-access-q2kd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344884 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344909 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344953 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344965 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-config\") on node 
\"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344975 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344983 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2kd6\" (UniqueName: \"kubernetes.io/projected/14efaf39-985f-45ea-ab79-0b8b2044c7f7-kube-api-access-q2kd6\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.344993 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14efaf39-985f-45ea-ab79-0b8b2044c7f7-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345001 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14efaf39-985f-45ea-ab79-0b8b2044c7f7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345010 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345020 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.345032 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcj7j\" (UniqueName: \"kubernetes.io/projected/dd1a4559-f0ef-4bc6-b318-2c91b798b76d-kube-api-access-vcj7j\") on node \"crc\" DevicePath \"\"" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.346484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.346749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.347155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.350005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc 
kubenswrapper[4867]: I0214 04:12:38.361626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"controller-manager-748d4597b7-zr2sc\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" event={"ID":"14efaf39-985f-45ea-ab79-0b8b2044c7f7","Type":"ContainerDied","Data":"d80c060a94d17951aad5e051f55bf43d373a158b1129e1b3c3d94726f3601c49"} Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393800 4867 scope.go:117] "RemoveContainer" containerID="ffdcb8b4f0119bbfa4081845fbe7d22aac75e8abd20c4cfd6d4121782f9269ad" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.393747 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.397287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" event={"ID":"dd1a4559-f0ef-4bc6-b318-2c91b798b76d","Type":"ContainerDied","Data":"5207b73aaa57eb157e090896dbc459c86fd8684eae6a2b10610ff75ec8af8595"} Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.397473 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nt7fn" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.423632 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.427410 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.439370 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.445136 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nt7fn"] Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.466540 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.828421 4867 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-29p6h container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 04:12:38 crc kubenswrapper[4867]: I0214 04:12:38.828608 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-29p6h" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 04:12:39 crc kubenswrapper[4867]: I0214 04:12:39.013916 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14efaf39-985f-45ea-ab79-0b8b2044c7f7" path="/var/lib/kubelet/pods/14efaf39-985f-45ea-ab79-0b8b2044c7f7/volumes" Feb 14 04:12:39 crc kubenswrapper[4867]: I0214 04:12:39.015469 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1a4559-f0ef-4bc6-b318-2c91b798b76d" path="/var/lib/kubelet/pods/dd1a4559-f0ef-4bc6-b318-2c91b798b76d/volumes" Feb 14 04:12:39 crc kubenswrapper[4867]: I0214 04:12:39.278605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.823097 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.825755 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.831895 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832064 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832136 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832219 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832229 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.832340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.842728 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876328 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.876428 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.977869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod 
\"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.977973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.978014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.978042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.979239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.979531 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.982089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:40 crc kubenswrapper[4867]: I0214 04:12:40.996034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"route-controller-manager-74548f6c84-krdz8\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:41 crc kubenswrapper[4867]: I0214 04:12:41.150902 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.811725 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.813596 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.815647 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:42 crc kubenswrapper[4867]: I0214 04:12:42.903297 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004322 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004394 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.004586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"installer-9-crc\" (UID: 
\"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.022599 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"installer-9-crc\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:43 crc kubenswrapper[4867]: I0214 04:12:43.171185 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.351096 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.351581 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n9vq9_openshift-marketplace(21ce8d91-a436-4fe6-b5fd-1988e588ded8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:45 crc kubenswrapper[4867]: E0214 04:12:45.352783 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" Feb 14 04:12:46 crc kubenswrapper[4867]: I0214 04:12:46.920205 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: 
connect: connection refused" start-of-body= Feb 14 04:12:46 crc kubenswrapper[4867]: I0214 04:12:46.920258 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:12:47 crc kubenswrapper[4867]: E0214 04:12:47.607000 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.984827 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.985315 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmwl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5mz22_openshift-marketplace(4cf2e46b-a553-4b29-b6f2-02072b8660d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:48 crc kubenswrapper[4867]: E0214 04:12:48.986474 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.251603 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.270616 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.270823 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzh4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-x4khs_openshift-marketplace(f27f899c-e2d8-4601-9a36-4582192436b7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:12:56 crc kubenswrapper[4867]: E0214 04:12:56.271944 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" Feb 14 04:12:56 crc kubenswrapper[4867]: I0214 04:12:56.918966 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:12:56 crc kubenswrapper[4867]: I0214 04:12:56.919393 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 
10.217.0.23:8080: connect: connection refused" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.831702 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.832021 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtnvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8vs6k_openshift-marketplace(b6d1c1c6-899d-4220-8f80-defae4ba56f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:00 crc kubenswrapper[4867]: E0214 04:13:00.833257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.250770 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.250845 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.250904 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.252241 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:13:01 crc kubenswrapper[4867]: I0214 04:13:01.253558 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" gracePeriod=600 Feb 14 04:13:01 crc kubenswrapper[4867]: E0214 04:13:01.339093 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 14 04:13:01 crc kubenswrapper[4867]: E0214 04:13:01.339242 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmkjt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jc878_openshift-marketplace(fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:01 crc kubenswrapper[4867]: E0214 04:13:01.340534 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jc878" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" Feb 14 04:13:02 crc 
kubenswrapper[4867]: I0214 04:13:02.530773 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" exitCode=0 Feb 14 04:13:02 crc kubenswrapper[4867]: I0214 04:13:02.530820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3"} Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.891945 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.892174 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mp526,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2cjxf_openshift-marketplace(0683c2f1-5695-4ef3-b6cc-31fe804c6dc6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:03 crc kubenswrapper[4867]: E0214 04:13:03.893443 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.080353 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jc878" 
podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081025 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081182 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" Feb 14 04:13:05 crc kubenswrapper[4867]: E0214 04:13:05.081202 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.158195 4867 scope.go:117] "RemoveContainer" containerID="9560a6c0d2908add05e4ca895184c5c2c58cffdd60f774e8164ccee333384db8" Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.440540 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.527448 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.534007 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0e717e9c_3ff4_420e_8f69_26044fc5e482.slice/crio-798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438 WatchSource:0}: Error finding container 798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438: Status 404 returned error can't find the container with id 798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438 Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.564422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerStarted","Data":"eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc"} Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.565396 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerStarted","Data":"798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438"} Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.565577 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4b6k5"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.573809 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7206174b_645b_4924_8345_d1d4b1a5ec39.slice/crio-d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c WatchSource:0}: Error finding container d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c: Status 404 returned error can't find the container with id 
d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.728592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:13:05 crc kubenswrapper[4867]: I0214 04:13:05.746695 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:13:05 crc kubenswrapper[4867]: W0214 04:13:05.760191 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc312f687_8694_4be3_a1ac_ddb1a0e8e1e6.slice/crio-e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7 WatchSource:0}: Error finding container e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7: Status 404 returned error can't find the container with id e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7 Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.481641 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.482106 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8f5g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gvh7q_openshift-marketplace(2e834244-05c0-4e48-9e2a-7c69cf930951): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.483457 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" 
pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.574144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerStarted","Data":"f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.574535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerStarted","Data":"541ea6e9e6c3a77aac7816654698f9c602bfc9a3197a2fd757215b2f093807ec"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.574894 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.577489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerStarted","Data":"1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.579402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerStarted","Data":"e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.581349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"428657683c188c9151d48ece253a11bffad3e756aad099cbaca848114b650376"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.581376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"d4a66555ea7fd71658fde0e679ecdc654cf768c20dd8915504ae31493fc1728c"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583241 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerStarted","Data":"d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerStarted","Data":"e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7"} Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.583547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.589598 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x9sjv" event={"ID":"72546cbc-3499-4110-b0e4-58beab7cc8a5","Type":"ContainerStarted","Data":"f1032fb4248d8848aa74a32078e94558edcfccf5692ba81381e6264aab175df3"} Feb 14 04:13:06 crc kubenswrapper[4867]: 
I0214 04:13:06.591110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.591157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.591541 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.593900 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.599711 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.613086 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podStartSLOduration=29.613070936 podStartE2EDuration="29.613070936s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.611555087 +0000 UTC m=+218.692492401" watchObservedRunningTime="2026-02-14 04:13:06.613070936 +0000 UTC m=+218.694008250" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.614712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" podStartSLOduration=29.614705187 podStartE2EDuration="29.614705187s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.597644234 +0000 UTC m=+218.678581568" watchObservedRunningTime="2026-02-14 04:13:06.614705187 +0000 UTC m=+218.695642501" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.652569 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=24.652547599000002 podStartE2EDuration="24.652547599s" podCreationTimestamp="2026-02-14 04:12:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.652490218 +0000 UTC m=+218.733427532" watchObservedRunningTime="2026-02-14 04:13:06.652547599 +0000 UTC m=+218.733484923" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.780057 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 14 04:13:06 crc 
kubenswrapper[4867]: E0214 04:13:06.780599 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ztsqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-s8hwg_openshift-marketplace(1f7707be-b4dc-47c7-8a74-bc46399acd36): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:13:06 crc kubenswrapper[4867]: E0214 04:13:06.781873 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919703 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919761 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919798 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.919862 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.941278 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:13:06 crc kubenswrapper[4867]: I0214 04:13:06.965579 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=29.965563744 podStartE2EDuration="29.965563744s" podCreationTimestamp="2026-02-14 04:12:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:06.715776486 +0000 UTC m=+218.796713800" watchObservedRunningTime="2026-02-14 04:13:06.965563744 +0000 UTC m=+219.046501058" Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.606015 4867 generic.go:334] "Generic (PLEG): container finished" podID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerID="1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232" exitCode=0 Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.606259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerDied","Data":"1e9a67aeecba2c81f700639b7605c079fdc674717d231a63f286887b5989d232"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.609404 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4b6k5" event={"ID":"7206174b-645b-4924-8345-d1d4b1a5ec39","Type":"ContainerStarted","Data":"8ca174d87caff1de9590fea61881d6666f195d521ba3dc01cd9c9bdbc3ee5c9c"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.611347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.611854 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.612236 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:07 crc kubenswrapper[4867]: E0214 04:13:07.613026 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" Feb 14 04:13:07 crc kubenswrapper[4867]: I0214 04:13:07.675735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4b6k5" podStartSLOduration=197.675716793 podStartE2EDuration="3m17.675716793s" 
podCreationTimestamp="2026-02-14 04:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:13:07.65592346 +0000 UTC m=+219.736860794" watchObservedRunningTime="2026-02-14 04:13:07.675716793 +0000 UTC m=+219.756654107" Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.617340 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e" exitCode=0 Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.617444 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e"} Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.619610 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.619682 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:08 crc kubenswrapper[4867]: I0214 04:13:08.933810 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") pod \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") pod \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\" (UID: \"3f1bacbd-3b75-4814-83cc-1569cbbf36bb\") " Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.060871 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f1bacbd-3b75-4814-83cc-1569cbbf36bb" (UID: "3f1bacbd-3b75-4814-83cc-1569cbbf36bb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.061067 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.066267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f1bacbd-3b75-4814-83cc-1569cbbf36bb" (UID: "3f1bacbd-3b75-4814-83cc-1569cbbf36bb"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.161674 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f1bacbd-3b75-4814-83cc-1569cbbf36bb-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3f1bacbd-3b75-4814-83cc-1569cbbf36bb","Type":"ContainerDied","Data":"eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc"} Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626222 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0e14a1c0feea853b78fcf6336be6031c436d6dec3e3102e524efe8fc4064cc" Feb 14 04:13:09 crc kubenswrapper[4867]: I0214 04:13:09.626313 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 14 04:13:10 crc kubenswrapper[4867]: I0214 04:13:10.632388 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerStarted","Data":"59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb"} Feb 14 04:13:10 crc kubenswrapper[4867]: I0214 04:13:10.651839 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9vq9" podStartSLOduration=3.6370444280000003 podStartE2EDuration="1m10.651824938s" podCreationTimestamp="2026-02-14 04:12:00 +0000 UTC" firstStartedPulling="2026-02-14 04:12:03.002679266 +0000 UTC m=+155.083616580" lastFinishedPulling="2026-02-14 04:13:10.017459776 +0000 UTC m=+222.098397090" observedRunningTime="2026-02-14 04:13:10.649802297 +0000 UTC m=+222.730739611" watchObservedRunningTime="2026-02-14 04:13:10.651824938 +0000 UTC m=+222.732762252" Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.305930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.306160 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:13:11 crc kubenswrapper[4867]: I0214 04:13:11.643998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78"} Feb 14 04:13:12 crc kubenswrapper[4867]: I0214 04:13:12.946503 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" probeResult="failure" output=< Feb 14 04:13:12 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:13:12 crc kubenswrapper[4867]: > Feb 14 04:13:13 crc kubenswrapper[4867]: I0214 04:13:13.659618 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78" exitCode=0 Feb 14 04:13:13 crc kubenswrapper[4867]: I0214 04:13:13.659657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78"} Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919155 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919686 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919158 4867 patch_prober.go:28] interesting pod/downloads-7954f5f757-x9sjv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Feb 14 04:13:16 crc kubenswrapper[4867]: I0214 04:13:16.919808 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x9sjv" podUID="72546cbc-3499-4110-b0e4-58beab7cc8a5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Feb 14 04:13:17 crc kubenswrapper[4867]: I0214 04:13:17.681701 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerStarted","Data":"c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3"} Feb 14 04:13:17 crc kubenswrapper[4867]: I0214 04:13:17.708215 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5mz22" podStartSLOduration=4.296732424 podStartE2EDuration="1m20.708195191s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.842659324 +0000 UTC m=+152.923596638" lastFinishedPulling="2026-02-14 04:13:17.254122101 +0000 UTC m=+229.335059405" observedRunningTime="2026-02-14 04:13:17.701934102 +0000 UTC m=+229.782871426" watchObservedRunningTime="2026-02-14 04:13:17.708195191 +0000 UTC m=+229.789132525" Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.394108 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.394459 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:13:18 crc kubenswrapper[4867]: I0214 04:13:18.690243 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4"} Feb 14 04:13:19 crc kubenswrapper[4867]: I0214 04:13:19.482093 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" probeResult="failure" output=< Feb 14 04:13:19 
crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:13:19 crc kubenswrapper[4867]: > Feb 14 04:13:19 crc kubenswrapper[4867]: I0214 04:13:19.735196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708"} Feb 14 04:13:20 crc kubenswrapper[4867]: I0214 04:13:20.753703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb"} Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.425425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.473911 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.762073 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4" exitCode=0 Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.762135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4"} Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.764790 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708" exitCode=0 Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.764835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708"} Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.770120 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerID="85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb" exitCode=0 Feb 14 04:13:21 crc kubenswrapper[4867]: I0214 04:13:21.770453 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb"} Feb 14 04:13:26 crc kubenswrapper[4867]: I0214 04:13:26.923887 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-x9sjv" Feb 14 04:13:28 crc kubenswrapper[4867]: I0214 04:13:28.446564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:13:28 crc kubenswrapper[4867]: I0214 04:13:28.499404 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.834382 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerStarted","Data":"118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.837225 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.843998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.845940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.847699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerStarted","Data":"60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.856995 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2cjxf" podStartSLOduration=4.727915658 podStartE2EDuration="1m36.856979987s" podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.950242563 +0000 UTC m=+154.031179877" lastFinishedPulling="2026-02-14 04:13:34.079306892 +0000 UTC m=+246.160244206" observedRunningTime="2026-02-14 04:13:34.854573566 +0000 UTC m=+246.935510890" watchObservedRunningTime="2026-02-14 04:13:34.856979987 +0000 UTC m=+246.937917301" Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.862377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerStarted","Data":"ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931"} Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.922136 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jc878" podStartSLOduration=2.852256427 podStartE2EDuration="1m33.922118222s" podCreationTimestamp="2026-02-14 04:12:01 +0000 UTC" firstStartedPulling="2026-02-14 04:12:02.983522458 +0000 UTC m=+155.064459772" lastFinishedPulling="2026-02-14 04:13:34.053384233 +0000 UTC m=+246.134321567" observedRunningTime="2026-02-14 04:13:34.906977797 +0000 UTC m=+246.987915111" watchObservedRunningTime="2026-02-14 04:13:34.922118222 +0000 UTC m=+247.003055536" Feb 14 04:13:34 crc kubenswrapper[4867]: I0214 04:13:34.968050 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x4khs" podStartSLOduration=3.794932676 podStartE2EDuration="1m36.968030719s" podCreationTimestamp="2026-02-14 04:11:58 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.853233573 +0000 UTC m=+152.934170887" lastFinishedPulling="2026-02-14 04:13:34.026331616 +0000 UTC 
m=+246.107268930" observedRunningTime="2026-02-14 04:13:34.966396357 +0000 UTC m=+247.047333671" watchObservedRunningTime="2026-02-14 04:13:34.968030719 +0000 UTC m=+247.048968033" Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.868599 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6" exitCode=0 Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.868671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6"} Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.871447 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422" exitCode=0 Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.871535 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422"} Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.873855 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63" exitCode=0 Feb 14 04:13:35 crc kubenswrapper[4867]: I0214 04:13:35.873920 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63"} Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.881652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerStarted","Data":"7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6"} Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.884144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerStarted","Data":"d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e"} Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.886499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerStarted","Data":"fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898"} Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.910038 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8hwg" podStartSLOduration=2.605449596 podStartE2EDuration="1m36.910021714s" podCreationTimestamp="2026-02-14 04:12:00 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.968048537 +0000 UTC m=+154.048985851" lastFinishedPulling="2026-02-14 04:13:36.272620655 +0000 UTC m=+248.353557969" observedRunningTime="2026-02-14 04:13:36.90712359 +0000 UTC m=+248.988060904" watchObservedRunningTime="2026-02-14 04:13:36.910021714 +0000 UTC m=+248.990959038" Feb 14 04:13:36 crc kubenswrapper[4867]: 
I0214 04:13:36.929614 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8vs6k" podStartSLOduration=4.423403619 podStartE2EDuration="1m39.929595721s" podCreationTimestamp="2026-02-14 04:11:57 +0000 UTC" firstStartedPulling="2026-02-14 04:12:00.913850629 +0000 UTC m=+152.994787943" lastFinishedPulling="2026-02-14 04:13:36.420042731 +0000 UTC m=+248.500980045" observedRunningTime="2026-02-14 04:13:36.926043411 +0000 UTC m=+249.006980745" watchObservedRunningTime="2026-02-14 04:13:36.929595721 +0000 UTC m=+249.010533035" Feb 14 04:13:36 crc kubenswrapper[4867]: I0214 04:13:36.946634 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvh7q" podStartSLOduration=3.548929708 podStartE2EDuration="1m37.946616824s" podCreationTimestamp="2026-02-14 04:11:59 +0000 UTC" firstStartedPulling="2026-02-14 04:12:01.945704687 +0000 UTC m=+154.026642011" lastFinishedPulling="2026-02-14 04:13:36.343391813 +0000 UTC m=+248.424329127" observedRunningTime="2026-02-14 04:13:36.944244644 +0000 UTC m=+249.025181958" watchObservedRunningTime="2026-02-14 04:13:36.946616824 +0000 UTC m=+249.027554138" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.345050 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.345106 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.506892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.506938 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.561017 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.735339 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.735401 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:13:38 crc kubenswrapper[4867]: I0214 04:13:38.778034 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:13:39 crc kubenswrapper[4867]: I0214 04:13:39.388791 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:13:39 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:13:39 crc kubenswrapper[4867]: > Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.103000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.103056 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 
04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.141876 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.688552 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.688601 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:13:40 crc kubenswrapper[4867]: I0214 04:13:40.728583 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.818188 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.818246 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.863630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:13:41 crc kubenswrapper[4867]: I0214 04:13:41.948455 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.904388 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.905850 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.905937 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906104 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f1bacbd-3b75-4814-83cc-1569cbbf36bb" containerName="pruner" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906474 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906598 4867 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906598 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906813 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" gracePeriod=15 Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906832 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" gracePeriod=15 Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.906977 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907046 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907111 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907173 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907000 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" gracePeriod=15 Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906952 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" gracePeriod=15 Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907306 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907405 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907426 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907432 4867 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907448 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907453 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 04:13:43 crc kubenswrapper[4867]: E0214 04:13:43.907464 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907469 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.906957 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" gracePeriod=15 Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907676 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907690 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907700 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907707 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907716 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.907723 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 14 04:13:43 crc kubenswrapper[4867]: I0214 04:13:43.913389 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069382 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069438 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.069672 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170472 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170571 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170581 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170581 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170608 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170651 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.170709 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.940951 4867 generic.go:334] "Generic (PLEG): container finished" podID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerID="e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.941052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerDied","Data":"e88a66dab5c2b34dc63a7059bdf03187c70eb6a356f22173e8d8866831ed9219"} Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.941976 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.943233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.944361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945155 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945177 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945185 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" exitCode=0 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945195 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" exitCode=2 Feb 14 04:13:44 crc kubenswrapper[4867]: I0214 04:13:44.945226 4867 scope.go:117] "RemoveContainer" containerID="b9a86a9d4bdcb85bed9cc5869d14d5d0dcd8a0e22ad73bcc1a9db45554d0c687" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.869237 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.869582 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870078 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870558 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.870918 4867 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:45 crc kubenswrapper[4867]: I0214 04:13:45.870952 4867 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 14 04:13:45 crc kubenswrapper[4867]: E0214 04:13:45.871118 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="200ms" Feb 14 04:13:45 crc kubenswrapper[4867]: I0214 04:13:45.954008 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.004247 4867 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" volumeName="registry-storage" Feb 
14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.071888 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="400ms" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.298960 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.299624 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400193 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400290 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400336 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") pod \"0e717e9c-3ff4-420e-8f69-26044fc5e482\" (UID: \"0e717e9c-3ff4-420e-8f69-26044fc5e482\") " Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock" (OuterVolumeSpecName: "var-lock") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400599 4867 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.400612 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0e717e9c-3ff4-420e-8f69-26044fc5e482-var-lock\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.406148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0e717e9c-3ff4-420e-8f69-26044fc5e482" (UID: "0e717e9c-3ff4-420e-8f69-26044fc5e482"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:13:46 crc kubenswrapper[4867]: E0214 04:13:46.473149 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="800ms" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.502093 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0e717e9c-3ff4-420e-8f69-26044fc5e482-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0e717e9c-3ff4-420e-8f69-26044fc5e482","Type":"ContainerDied","Data":"798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438"} Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970756 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="798ef9a3e213da1cc192f6e3e40f1dc1868f826121e73963d17a6206d8028438" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.970813 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 14 04:13:46 crc kubenswrapper[4867]: I0214 04:13:46.984355 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: E0214 04:13:47.274524 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="1.6s" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.611731 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.612863 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.613361 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.613845 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715799 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716085 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.715899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716351 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.716393 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818097 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818135 4867 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.818147 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.978573 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979318 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979560 4867 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" exitCode=0 Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.979660 4867 scope.go:117] "RemoveContainer" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.992292 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.992792 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:47 crc kubenswrapper[4867]: I0214 04:13:47.994002 4867 scope.go:117] "RemoveContainer" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.007961 4867 scope.go:117] "RemoveContainer" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.022938 4867 scope.go:117] "RemoveContainer" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.035573 4867 scope.go:117] "RemoveContainer" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.048521 4867 scope.go:117] "RemoveContainer" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064418 4867 scope.go:117] "RemoveContainer" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:48 crc 
kubenswrapper[4867]: E0214 04:13:48.064855 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": container with ID starting with 37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48 not found: ID does not exist" containerID="37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064899 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48"} err="failed to get container status \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": rpc error: code = NotFound desc = could not find container \"37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48\": container with ID starting with 37c96b250166bcf9c613c7707d9b66c11bbb6292c67d03ed9c9cd8359f466d48 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.064935 4867 scope.go:117] "RemoveContainer" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065193 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": container with ID starting with 7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7 not found: ID does not exist" containerID="7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065222 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7"} err="failed to get container status \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": rpc error: code = NotFound desc = could not find container \"7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7\": container with ID starting with 7d60b00afe16ba210d6cf3e8edd9c12aef490177b83185e6d74f219cc35efbc7 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065238 4867 scope.go:117] "RemoveContainer" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065569 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": container with ID starting with 44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc not found: ID does not exist" containerID="44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065601 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc"} err="failed to get container status \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": rpc error: code = NotFound desc = could not find container \"44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc\": container with ID starting with 44c65a590577c74e672bca804403f159a08ded5ed0e25daf1bef640898c304fc not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: 
I0214 04:13:48.065623 4867 scope.go:117] "RemoveContainer" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.065932 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": container with ID starting with 3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243 not found: ID does not exist" containerID="3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065958 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243"} err="failed to get container status \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": rpc error: code = NotFound desc = could not find container \"3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243\": container with ID starting with 3b2c4d9c08ee7188cfea877222707517949a93291dac8409facff18ccd5d9243 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.065974 4867 scope.go:117] "RemoveContainer" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.066266 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": container with ID starting with ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe not found: ID does not exist" containerID="ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe"} err="failed to get container status \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": rpc error: code = NotFound desc = could not find container \"ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe\": container with ID starting with ff2ac5b982c695cfabb0b045748396477b0076e3f4bd77aedf8140d8d212eefe not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066299 4867 scope.go:117] "RemoveContainer" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.066493 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": container with ID starting with 6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302 not found: ID does not exist" containerID="6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.066524 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302"} err="failed to get container status \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": rpc error: code = NotFound desc = could not find container \"6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302\": container 
with ID starting with 6722b2b41a7d995647733770ac5341e1444fcb4cd966bef0df3cc4f45ae0f302 not found: ID does not exist" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.386353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.387670 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.388171 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.388731 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.438219 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.439381 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.440046 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.440581 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.549623 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.550789 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.550983 4867 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.551242 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.551563 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.777856 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.778711 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779106 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779372 4867 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779673 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.779920 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.876041 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="3.2s" Feb 14 
04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.950020 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.950549 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:48 crc kubenswrapper[4867]: W0214 04:13:48.969664 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58 WatchSource:0}: Error finding container 1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58: Status 404 returned error can't find the container with id 1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58 Feb 14 04:13:48 crc kubenswrapper[4867]: E0214 04:13:48.982652 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189401b4ad9a96ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,LastTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:13:48 crc kubenswrapper[4867]: I0214 04:13:48.995919 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1ce48bf8dbd63206355352f06f78ae103f88293f63e186df20e4c68d1ae58f58"} Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.001744 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002096 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002368 4867 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002579 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.002780 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:49 crc kubenswrapper[4867]: I0214 04:13:49.010725 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.002254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd"} Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.002877 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: E0214 04:13:50.003008 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.003121 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.003721 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.004247 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" 
Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147178 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147642 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.147951 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148201 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148406 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.148967 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.726751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.727430 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.727897 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.728322 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc 
kubenswrapper[4867]: I0214 04:13:50.728815 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.729382 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:50 crc kubenswrapper[4867]: I0214 04:13:50.729684 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:51 crc kubenswrapper[4867]: E0214 04:13:51.010214 4867 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 14 04:13:51 crc kubenswrapper[4867]: E0214 04:13:51.635554 4867 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.113:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189401b4ad9a96ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,LastTimestamp:2026-02-14 04:13:48.981778159 +0000 UTC m=+261.062715463,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 14 04:13:52 crc kubenswrapper[4867]: E0214 04:13:52.077111 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="6.4s" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.938039 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-14T04:13:56Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939124 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939413 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939692 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939881 4867 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:56 crc kubenswrapper[4867]: E0214 04:13:56.939898 4867 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058403 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058502 4867 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a" exitCode=1 Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.058627 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a"} Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.059473 4867 scope.go:117] "RemoveContainer" containerID="898133696f8478fcb41fba24d15e056570cab68af53a559cb642724dff51617a" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.059611 4867 
status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.060803 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061282 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061617 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.061914 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.063486 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.063820 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:58 crc kubenswrapper[4867]: E0214 04:13:58.477601 4867 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.113:6443: connect: connection refused" interval="7s" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.996525 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:58 crc kubenswrapper[4867]: I0214 04:13:58.999775 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.000229 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.000716 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001024 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001266 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001567 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.001830 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002178 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002388 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.002707 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003165 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003463 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.003764 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.004000 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.011158 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.011185 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:13:59 crc kubenswrapper[4867]: E0214 04:13:59.011564 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.012031 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:13:59 crc kubenswrapper[4867]: W0214 04:13:59.032657 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83 WatchSource:0}: Error finding container d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83: Status 404 returned error can't find the container with id d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83 Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.066971 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"017a857e2f79c693f0cb46747dd0950cd029e8ac2d878ddd91749e9ab1131b12"} Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067928 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.067995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d19d83807e2c60c9d28be55d6ab831653f92c3e0eb1dcbee3a2f8da2c22a4a83"} Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068187 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068401 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068623 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.068824 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: 
I0214 04:13:59.069000 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:13:59 crc kubenswrapper[4867]: I0214 04:13:59.069187 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073598 4867 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="825c6dd04856560083774b31efb866a033b44ccbf051e38f178b6b74973b2388" exitCode=0 Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"825c6dd04856560083774b31efb866a033b44ccbf051e38f178b6b74973b2388"} Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073917 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.073942 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.074461 4867 status_manager.go:851] "Failed to get status for pod" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" pod="openshift-marketplace/community-operators-8vs6k" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8vs6k\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: E0214 04:14:00.074580 4867 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.074774 4867 status_manager.go:851] "Failed to get status for pod" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" pod="openshift-marketplace/certified-operators-x4khs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-x4khs\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075101 4867 status_manager.go:851] "Failed to get status for pod" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" pod="openshift-marketplace/redhat-marketplace-s8hwg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-s8hwg\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075388 4867 status_manager.go:851] "Failed to get status for pod" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.075781 4867 status_manager.go:851] "Failed to get status for pod" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" pod="openshift-marketplace/community-operators-2cjxf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2cjxf\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.076209 4867 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:00 crc kubenswrapper[4867]: I0214 04:14:00.076571 4867 status_manager.go:851] "Failed to get status for pod" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" pod="openshift-marketplace/redhat-marketplace-gvh7q" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-gvh7q\": dial tcp 38.102.83.113:6443: connect: connection refused" Feb 14 04:14:01 crc kubenswrapper[4867]: I0214 04:14:01.085127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc896aad0a51feb240844629a8a04a80cdcc2164b3884b8194232f3e137bf9b8"} Feb 14 04:14:01 crc kubenswrapper[4867]: I0214 04:14:01.085705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"34ecbcdcfe7caeb94d120d8ba76d3e82a5981b2cc4bc85eac4dcf4f90d72eee4"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc520929f005a2ab20b1521b62b3e23f3a20f05efaf0e71119e639b949a971fe"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094308 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094337 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216" Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094344 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8bbd15c45b3dd04ad68f75e71e476e2ae097893c6bd36b7b144f4f60be34b421"} Feb 14 04:14:02 crc kubenswrapper[4867]: I0214 04:14:02.094356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c641dba8692bb0b286320904a6b471e83001fc1bae562caab5c83019ed9c0c9"} 
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.012633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.012673 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.016886 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.038136 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.182788 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:14:04 crc kubenswrapper[4867]: I0214 04:14:04.186310 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:14:07 crc kubenswrapper[4867]: I0214 04:14:07.129368 4867 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:08 crc kubenswrapper[4867]: I0214 04:14:08.124580 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:08 crc kubenswrapper[4867]: I0214 04:14:08.124640 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:08 crc kubenswrapper[4867]: I0214 04:14:08.128909 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.032344 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="531e3d0c-a640-414e-8d3e-3370088f5d13"
Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.128736 4867 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.128962 4867 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="b5aa8290-4924-4bc2-bd8e-576e53fa4216"
Feb 14 04:14:09 crc kubenswrapper[4867]: I0214 04:14:09.132544 4867 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="531e3d0c-a640-414e-8d3e-3370088f5d13"
Feb 14 04:14:14 crc kubenswrapper[4867]: I0214 04:14:14.043111 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 14 04:14:16 crc kubenswrapper[4867]: I0214 04:14:16.309628 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 14 04:14:16 crc kubenswrapper[4867]: I0214 04:14:16.872664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 14 04:14:16 crc kubenswrapper[4867]: I0214 04:14:16.896662 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.215292 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.250606 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.399835 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.488207 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.524329 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.625133 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.765350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.781448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.936909 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.938115 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 14 04:14:17 crc kubenswrapper[4867]: I0214 04:14:17.969478 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.298092 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.388033 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.589200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.769260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.933378 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 14 04:14:18 crc kubenswrapper[4867]: I0214 04:14:18.941402 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.098054 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.128964 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.263480 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.278554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.414595 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.686609 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.717856 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.741842 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 14 04:14:19 crc kubenswrapper[4867]: I0214 04:14:19.766736 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.377049 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.408189 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.697350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.724443 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.770094 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.812743 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.866010 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 14 04:14:20 crc kubenswrapper[4867]: I0214 04:14:20.984753 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.066665 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.069900 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.153964 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.167425 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.177711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.183535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.194246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.209836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.212192 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.284978 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.294951 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.353865 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.371036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.405519 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.411302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.425491 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.499947 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.552371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.662619 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.719850 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.775048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.835984 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.841834 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.841974 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.863757 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 14 04:14:21 crc kubenswrapper[4867]: I0214 04:14:21.979470 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.045897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.061143 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.086120 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.132971 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.154753 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.262346 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.293394 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.319601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.379001 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.393943 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.552377 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.614955 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.730313 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.784271 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.802711 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.875770 4867 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.880317 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.880369 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.883580 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.886183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.890081 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.925470 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.925447316 podStartE2EDuration="15.925447316s" podCreationTimestamp="2026-02-14 04:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:14:22.904075933 +0000 UTC m=+294.985013247" watchObservedRunningTime="2026-02-14 04:14:22.925447316 +0000 UTC m=+295.006384640"
Feb 14 04:14:22 crc kubenswrapper[4867]: I0214 04:14:22.963193 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.022216 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.058720 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.064321 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.123024 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.129415 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.219372 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.324176 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.375403 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.394832 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.414064 4867 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.645085 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.653922 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.678018 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.949423 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 14 04:14:23 crc kubenswrapper[4867]: I0214 04:14:23.966879 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.008410 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.112222 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.114675 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.141725 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.179200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.221833 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.228749 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.237597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.307836 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.345726 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.411985 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.431920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.437374 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.445911 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.457422 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.529838 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.568892 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.765014 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.823901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.850726 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.852334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 14 04:14:24 crc kubenswrapper[4867]: I0214 04:14:24.942046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.047686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.087737 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.175223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.360564 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.396818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.402124 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.504971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.624239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.674203 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.698616 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.789989 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.805420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.852724 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.873740 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.908525 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.926727 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.961971 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 14 04:14:25 crc kubenswrapper[4867]: I0214 04:14:25.988334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.067759 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.207062 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.213795 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.266579 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.294159 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.345954 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.695822 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.699541 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.772222 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.797297 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.936659 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.959203 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 14 04:14:26 crc kubenswrapper[4867]: I0214 04:14:26.973667 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.008522 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.023725 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.113883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.134891 4867 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.158018 4867 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.260907 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.306404 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.309145 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.371541 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.494243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.569288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.580036 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.746767 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.754029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.759865 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.796581 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 14 04:14:27 crc kubenswrapper[4867]: I0214 04:14:27.955819 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.008946 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.064621 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.068425 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.351124 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.386184 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.551236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.611844 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.640888 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.760029 4867 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.777610 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.786998 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.833347 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.838679 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.854202 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.862838 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.890171 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.916464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 14 04:14:28 crc kubenswrapper[4867]: I0214 04:14:28.928702 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.129158 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.161775 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.173688 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.200442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.256490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.471932 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.636063 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.691083 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.694693 4867 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.700875 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.744753 4867 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.792371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.803310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.809128 4867 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.809448 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd" gracePeriod=5
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.850390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 14 04:14:29 crc kubenswrapper[4867]: I0214 04:14:29.853817 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.040793 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.132119 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.158434 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.303367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.372125 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.520806 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.528382 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.573851 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.624942 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.803008 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.815376 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.893494 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.910617 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.987844 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 14 04:14:30 crc kubenswrapper[4867]: I0214 04:14:30.990972 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.008692 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.164484 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.191569 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.433940 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.525959 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.568536 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.652137 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.721390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 14 04:14:31 crc kubenswrapper[4867]: I0214 04:14:31.979810 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.030785 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.122172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.124531 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.242532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.541292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.573046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.663372 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.717761 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.775163 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 14 04:14:32 crc kubenswrapper[4867]: I0214 04:14:32.966608 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.307739 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.310029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 14 04:14:33 crc kubenswrapper[4867]: I0214 04:14:33.752191 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.278330 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.278727 4867 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd" exitCode=137
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.395378 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.395462 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548264 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548500 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548560 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548631 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548866 4867 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548886 4867 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548898 4867 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.548909 4867 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.567289 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 14 04:14:35 crc kubenswrapper[4867]: I0214 04:14:35.650420 4867 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284064 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284365 4867 scope.go:117] "RemoveContainer" containerID="e8b6ac2ad40980da7eed4ab19a090dd414cd17e380844b8fe6f7a8d4336ff8cd"
Feb 14 04:14:36 crc kubenswrapper[4867]: I0214 04:14:36.284469 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 14 04:14:37 crc kubenswrapper[4867]: I0214 04:14:37.005659 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.096001 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.098162 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5mz22" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" containerID="cri-o://c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.103448 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.103936 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x4khs" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" containerID="cri-o://ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.117092 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.119168 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2cjxf" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" containerID="cri-o://118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.122894 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.123130 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8vs6k" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" containerID="cri-o://fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.134747 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.135260 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" containerID="cri-o://51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153150 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153219 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153231 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.153437 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvh7q" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" containerID="cri-o://d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.154013 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8hwg" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server" containerID="cri-o://7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.154196 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jc878" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" containerID="cri-o://60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.159050 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.159361 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9vq9" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" containerID="cri-o://59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb" gracePeriod=30
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.183812 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"]
Feb 14 04:14:42 crc kubenswrapper[4867]: E0214 04:14:42.184042 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184054 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: E0214 04:14:42.184069 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184075 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184160 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e717e9c-3ff4-420e-8f69-26044fc5e482" containerName="installer"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184173 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.184902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.193648 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"]
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336881 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.336925 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dp2r\" (UniqueName: \"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp"
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.341683 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerID="60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.341753 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.349160 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerID="c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.349304 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.360123 4867 generic.go:334] "Generic (PLEG): container finished" podID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerID="59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.360209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.366319 4867 generic.go:334] "Generic (PLEG): container finished" podID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerID="118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.366405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.368456 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerID="7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.368474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.370335 4867 generic.go:334] "Generic (PLEG): container finished" podID="f27f899c-e2d8-4601-9a36-4582192436b7" containerID="ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.370361 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.372113 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerID="d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.372161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.373430 4867 generic.go:334] "Generic (PLEG): container finished" podID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerID="51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.373501 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerDied","Data":"51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.375834 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerID="fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898" exitCode=0
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.375859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898"}
Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438288 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dp2r\" (UniqueName:
\"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.438477 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.441069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.445223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/33b576d8-f768-4fd2-895d-7d4ababe8714-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.456682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dp2r\" (UniqueName: \"kubernetes.io/projected/33b576d8-f768-4fd2-895d-7d4ababe8714-kube-api-access-8dp2r\") pod \"marketplace-operator-79b997595-p82xp\" (UID: \"33b576d8-f768-4fd2-895d-7d4ababe8714\") " pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.774639 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.778947 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.784378 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.791249 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.801497 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.835488 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.841720 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842150 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842385 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.842940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") pod \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\" (UID: \"4cf2e46b-a553-4b29-b6f2-02072b8660d9\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.847620 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities" (OuterVolumeSpecName: "utilities") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.854915 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4" (OuterVolumeSpecName: "kube-api-access-rmwl4") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "kube-api-access-rmwl4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.855043 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.905738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cf2e46b-a553-4b29-b6f2-02072b8660d9" (UID: "4cf2e46b-a553-4b29-b6f2-02072b8660d9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944408 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944431 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944460 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944489 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944552 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944648 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") pod \"f27f899c-e2d8-4601-9a36-4582192436b7\" (UID: \"f27f899c-e2d8-4601-9a36-4582192436b7\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944709 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") pod \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\" (UID: \"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") pod \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\" (UID: \"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944757 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944776 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") pod \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\" (UID: \"21ce8d91-a436-4fe6-b5fd-1988e588ded8\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") pod \"1f7707be-b4dc-47c7-8a74-bc46399acd36\" (UID: \"1f7707be-b4dc-47c7-8a74-bc46399acd36\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944902 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944926 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") pod \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\" (UID: \"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.944947 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") pod \"2e834244-05c0-4e48-9e2a-7c69cf930951\" (UID: \"2e834244-05c0-4e48-9e2a-7c69cf930951\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") pod \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\" (UID: \"b6d1c1c6-899d-4220-8f80-defae4ba56f0\") " Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945219 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945236 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmwl4\" (UniqueName: \"kubernetes.io/projected/4cf2e46b-a553-4b29-b6f2-02072b8660d9-kube-api-access-rmwl4\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.945247 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cf2e46b-a553-4b29-b6f2-02072b8660d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.946227 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities" (OuterVolumeSpecName: "utilities") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.947356 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities" (OuterVolumeSpecName: "utilities") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.947906 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities" (OuterVolumeSpecName: "utilities") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz" (OuterVolumeSpecName: "kube-api-access-mtnvz") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "kube-api-access-mtnvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950493 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526" (OuterVolumeSpecName: "kube-api-access-mp526") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "kube-api-access-mp526". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.950899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities" (OuterVolumeSpecName: "utilities") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.951790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw" (OuterVolumeSpecName: "kube-api-access-gq2jw") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "kube-api-access-gq2jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.954170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities" (OuterVolumeSpecName: "utilities") pod "f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.956497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.958956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n" (OuterVolumeSpecName: "kube-api-access-rzh4n") pod "f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "kube-api-access-rzh4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.960387 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" (UID: "0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.960653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr" (OuterVolumeSpecName: "kube-api-access-v76pr") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "kube-api-access-v76pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.961616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities" (OuterVolumeSpecName: "utilities") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.963948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities" (OuterVolumeSpecName: "utilities") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.964299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf" (OuterVolumeSpecName: "kube-api-access-ztsqf") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "kube-api-access-ztsqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.967004 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2" (OuterVolumeSpecName: "kube-api-access-8f5g2") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "kube-api-access-8f5g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.970424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt" (OuterVolumeSpecName: "kube-api-access-nmkjt") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "kube-api-access-nmkjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.990215 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f7707be-b4dc-47c7-8a74-bc46399acd36" (UID: "1f7707be-b4dc-47c7-8a74-bc46399acd36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:42 crc kubenswrapper[4867]: I0214 04:14:42.995409 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e834244-05c0-4e48-9e2a-7c69cf930951" (UID: "2e834244-05c0-4e48-9e2a-7c69cf930951"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050386 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzh4n\" (UniqueName: \"kubernetes.io/projected/f27f899c-e2d8-4601-9a36-4582192436b7-kube-api-access-rzh4n\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050603 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050679 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050739 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkjt\" (UniqueName: \"kubernetes.io/projected/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-kube-api-access-nmkjt\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050802 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mp526\" (UniqueName: \"kubernetes.io/projected/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-kube-api-access-mp526\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050870 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f7707be-b4dc-47c7-8a74-bc46399acd36-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.050929 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtnvz\" (UniqueName: \"kubernetes.io/projected/b6d1c1c6-899d-4220-8f80-defae4ba56f0-kube-api-access-mtnvz\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051289 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051364 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq2jw\" (UniqueName: \"kubernetes.io/projected/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-kube-api-access-gq2jw\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051431 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051527 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051602 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v76pr\" (UniqueName: \"kubernetes.io/projected/21ce8d91-a436-4fe6-b5fd-1988e588ded8-kube-api-access-v76pr\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051686 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051761 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f5g2\" (UniqueName: \"kubernetes.io/projected/2e834244-05c0-4e48-9e2a-7c69cf930951-kube-api-access-8f5g2\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051846 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztsqf\" (UniqueName: \"kubernetes.io/projected/1f7707be-b4dc-47c7-8a74-bc46399acd36-kube-api-access-ztsqf\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052018 4867 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052106 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e834244-05c0-4e48-9e2a-7c69cf930951-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052234 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.052343 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.051260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-p82xp"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.070819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"f27f899c-e2d8-4601-9a36-4582192436b7" (UID: "f27f899c-e2d8-4601-9a36-4582192436b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.080151 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" (UID: "0683c2f1-5695-4ef3-b6cc-31fe804c6dc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.084024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d1c1c6-899d-4220-8f80-defae4ba56f0" (UID: "b6d1c1c6-899d-4220-8f80-defae4ba56f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.140423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" (UID: "fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.144032 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ce8d91-a436-4fe6-b5fd-1988e588ded8" (UID: "21ce8d91-a436-4fe6-b5fd-1988e588ded8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153930 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f27f899c-e2d8-4601-9a36-4582192436b7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153965 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153976 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153986 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d1c1c6-899d-4220-8f80-defae4ba56f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.153995 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ce8d91-a436-4fe6-b5fd-1988e588ded8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.326325 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383092 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvh7q" event={"ID":"2e834244-05c0-4e48-9e2a-7c69cf930951","Type":"ContainerDied","Data":"90d63cc6554a718e0d4cbfb1e7b6d2e1fdaca86fdf3238edfbe5d97515589316"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383169 4867 scope.go:117] "RemoveContainer" containerID="d4d72b2ebbd17189ee349d8b4d6304ac52d50866cfe1895c6576cff0ec95c46e" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.383177 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvh7q" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.385001 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" event={"ID":"0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2","Type":"ContainerDied","Data":"0b46292ee8547b3f863b2a98bb8fb2cf8703a9757ad76735d9fe0ebd6ef2ffbd"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.385051 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mkw9h" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.387012 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vs6k" event={"ID":"b6d1c1c6-899d-4220-8f80-defae4ba56f0","Type":"ContainerDied","Data":"9ac639b6394c5e1017aeaf569eada5d729a39bf526b8497bd4296ca3b0755153"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.387114 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8vs6k" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.391395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5mz22" event={"ID":"4cf2e46b-a553-4b29-b6f2-02072b8660d9","Type":"ContainerDied","Data":"23ddca82e7ec32caacf54a7cebc1ffb43fed1e460daeba077f08fce659c5713c"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.391429 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5mz22" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.394283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9vq9" event={"ID":"21ce8d91-a436-4fe6-b5fd-1988e588ded8","Type":"ContainerDied","Data":"4782354a698fe401c643d9fa5567f3591df600cf5a8f25b16b237312263df503"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.394641 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9vq9" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.396600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x4khs" event={"ID":"f27f899c-e2d8-4601-9a36-4582192436b7","Type":"ContainerDied","Data":"3e5452fa8e8c6fb391a2e17ab4b7c984074e14d79a0538110dcd9e41b18bd839"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.396637 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x4khs" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2cjxf" event={"ID":"0683c2f1-5695-4ef3-b6cc-31fe804c6dc6","Type":"ContainerDied","Data":"add894549a2aff626db3cd5482bf5486b20d694394b5286fe468f9059e3f4b1d"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398649 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2cjxf" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.398965 4867 scope.go:117] "RemoveContainer" containerID="7e50404d86dfa5abaa30ac013da7f00871fba46895499f9f17afba5a612ece63" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.408549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8hwg" event={"ID":"1f7707be-b4dc-47c7-8a74-bc46399acd36","Type":"ContainerDied","Data":"9414f47d96386d3ff0af0fa0050f52950e5a9a8e484274e0b79dd8bd6d0a669b"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.408637 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8hwg" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.412625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jc878" event={"ID":"fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab","Type":"ContainerDied","Data":"873ab4fab8bcde5b4877631fe5b476f986fe024be500dd128844b9b8ff975f35"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.412698 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jc878" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.417251 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" event={"ID":"33b576d8-f768-4fd2-895d-7d4ababe8714","Type":"ContainerStarted","Data":"816ecbead5e006e5b927df8e1b250bfef25e06ac1f4af4b58cde8881814d60ac"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.417297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" event={"ID":"33b576d8-f768-4fd2-895d-7d4ababe8714","Type":"ContainerStarted","Data":"0825a46fff7992e99f90d4a3200834f03176e7548c5fc3621a0c63e09014fe8b"} Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.418213 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.420292 4867 scope.go:117] "RemoveContainer" containerID="5ea24da634c74fd4522707557b46ec23669f943631ddc2b04acda4a65985a65f" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.420337 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.422637 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" start-of-body= Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.422705 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": dial tcp 10.217.0.60:8080: connect: connection refused" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.432388 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvh7q"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.439791 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.450454 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mkw9h"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.451616 4867 scope.go:117] "RemoveContainer" containerID="51dd7926e1bc9104319614773b3ee71539ad753d4fb48a3fd7a135d20615274f" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.457743 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.464236 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5mz22"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.470058 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podStartSLOduration=1.470033726 podStartE2EDuration="1.470033726s" podCreationTimestamp="2026-02-14 04:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 04:14:43.462520981 +0000 UTC m=+315.543458315" watchObservedRunningTime="2026-02-14 04:14:43.470033726 +0000 UTC m=+315.550971040" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.473381 4867 scope.go:117] "RemoveContainer" containerID="fde717817968c374eed933a0aba80886281d640f0cd7b277b1cbd496e7430898" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.482293 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.486550 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2cjxf"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.493021 4867 scope.go:117] "RemoveContainer" containerID="c9315920968c94ddf5477e0bdd603b5b8e9cbf807eefba671df93e2d03e2c2f6" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.500171 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.503156 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9vq9"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.509657 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.514594 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jc878"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.518238 4867 scope.go:117] "RemoveContainer" containerID="3e14d895a14f4a0564f7f7e3c69189c69564a9ff087f2c6d784da1dda53743aa" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.523181 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.528007 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8vs6k"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.532678 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.537835 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x4khs"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.547419 4867 scope.go:117] "RemoveContainer" containerID="c2877fef377b8448495213f1ba7610d513464667dbd0985d720e7b4e3414f0c3" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.551559 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.555118 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8hwg"] Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.563484 4867 scope.go:117] "RemoveContainer" containerID="07dc86f27711b42c0f0c70d02bf821bf6e645caa1d382d2a371675cf0f568e78" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.577192 4867 scope.go:117] "RemoveContainer" containerID="af97fea8edd2f6f86bfcc865565c17f7057a140b45a31735d974db6d18d89c4d" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.593460 4867 scope.go:117] "RemoveContainer" containerID="59d20d766b1edd844acfd10fcac06c637f2be95f509a76f1883642ffba8f4bdb" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.607173 4867 
scope.go:117] "RemoveContainer" containerID="1874a10e5b67d2e6bb513881074d5bce2e31adc733159821fa403df5a755105e" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.622857 4867 scope.go:117] "RemoveContainer" containerID="743ba93f76979f5c122f709823ba46e2f882af89613e670bb5a5b1a6bbf930e3" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.635237 4867 scope.go:117] "RemoveContainer" containerID="ce8e3a0d75f26f463ddb328420cf33514070ab3b090d2f2c0466cda65d982931" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.650648 4867 scope.go:117] "RemoveContainer" containerID="fc1f0bd8f7009d70b8d79a2619856a470a226829cf0b6491da5a920f404a7708" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.671704 4867 scope.go:117] "RemoveContainer" containerID="a4ecefe0bd25ea2146d501e1e030f255aa760e1d3b80ec52600bc04dede7435e" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.686148 4867 scope.go:117] "RemoveContainer" containerID="118aa202ac601ceca70d20070e2eef726e85bdc481297be9216162c3fbf1dc32" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.699630 4867 scope.go:117] "RemoveContainer" containerID="85287bd98780c8d28545ae3a7b154f6ba33f7e022b07f74e2ecc3b8f424c43cb" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.713161 4867 scope.go:117] "RemoveContainer" containerID="7e41463addb663f771a8a5f2b9e7c4873429544544dd6087d30ba5633e2b13ff" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.725190 4867 scope.go:117] "RemoveContainer" containerID="7fb020ae5c17769ac38af08639b438690daf523e3453b2d4607be04e3eed31f6" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.737288 4867 scope.go:117] "RemoveContainer" containerID="984fdfc85b05392cc72c5c84de4475acfa58af432c2af35475c4d0530104a422" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.748405 4867 scope.go:117] "RemoveContainer" containerID="74feb7884ba2418ee7d549ee5577cf3938f772233b39e1dc8f5cc302e9984613" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.758484 4867 scope.go:117] "RemoveContainer" containerID="60ffc454fecb09f395b2cdd3ab6338fbcdb34866e0895ad196ee1967f60209e8" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.771718 4867 scope.go:117] "RemoveContainer" containerID="a9a5891bbec4b4da6c9ef36e2dd93f2b54465511a9b15a7d390a7176eb2c82b4" Feb 14 04:14:43 crc kubenswrapper[4867]: I0214 04:14:43.789387 4867 scope.go:117] "RemoveContainer" containerID="32411749279c49995d30b3666ff88537eeae29bee0a978d984c3e86a4c392864" Feb 14 04:14:44 crc kubenswrapper[4867]: I0214 04:14:44.446776 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" Feb 14 04:14:44 crc kubenswrapper[4867]: I0214 04:14:44.511932 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.005008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" path="/var/lib/kubelet/pods/0683c2f1-5695-4ef3-b6cc-31fe804c6dc6/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.005860 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" path="/var/lib/kubelet/pods/0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.006315 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" 
path="/var/lib/kubelet/pods/1f7707be-b4dc-47c7-8a74-bc46399acd36/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.007395 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" path="/var/lib/kubelet/pods/21ce8d91-a436-4fe6-b5fd-1988e588ded8/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.008065 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" path="/var/lib/kubelet/pods/2e834244-05c0-4e48-9e2a-7c69cf930951/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.009160 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" path="/var/lib/kubelet/pods/4cf2e46b-a553-4b29-b6f2-02072b8660d9/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.009854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" path="/var/lib/kubelet/pods/b6d1c1c6-899d-4220-8f80-defae4ba56f0/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.010452 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" path="/var/lib/kubelet/pods/f27f899c-e2d8-4601-9a36-4582192436b7/volumes" Feb 14 04:14:45 crc kubenswrapper[4867]: I0214 04:14:45.011381 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" path="/var/lib/kubelet/pods/fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab/volumes" Feb 14 04:14:46 crc kubenswrapper[4867]: I0214 04:14:46.788532 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 14 04:14:47 crc kubenswrapper[4867]: I0214 04:14:47.499631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 14 04:14:48 crc kubenswrapper[4867]: I0214 04:14:48.278670 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 14 04:14:52 crc kubenswrapper[4867]: I0214 04:14:52.903893 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 14 04:14:54 crc kubenswrapper[4867]: I0214 04:14:54.469031 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:14:55 crc kubenswrapper[4867]: I0214 04:14:55.511848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 14 04:14:57 crc kubenswrapper[4867]: I0214 04:14:57.370336 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 14 04:14:59 crc kubenswrapper[4867]: I0214 04:14:59.315351 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.161877 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162080 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162093 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162104 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162110 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162122 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162130 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162138 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162144 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162151 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162164 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162170 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162176 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162183 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162203 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162211 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162220 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162227 4867 
state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162237 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162245 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162253 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162260 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162268 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162275 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162285 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162293 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162302 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162309 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162317 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162324 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162332 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162339 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162348 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162355 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162366 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 
04:15:00.162373 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="extract-content" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162383 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162391 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162404 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162413 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162422 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162429 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162440 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162448 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162456 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="extract-utilities" Feb 14 04:15:00 crc kubenswrapper[4867]: E0214 04:15:00.162472 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162479 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162611 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e834244-05c0-4e48-9e2a-7c69cf930951" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162630 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2f3ff2-c75e-4bfa-a4c2-837ac309e4d2" containerName="marketplace-operator" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162638 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2916d7-5ab2-47ca-b04a-2bc5e681d9ab" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162647 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d1c1c6-899d-4220-8f80-defae4ba56f0" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162659 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f7707be-b4dc-47c7-8a74-bc46399acd36" 
containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162668 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cf2e46b-a553-4b29-b6f2-02072b8660d9" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162681 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0683c2f1-5695-4ef3-b6cc-31fe804c6dc6" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162691 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f27f899c-e2d8-4601-9a36-4582192436b7" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.162700 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ce8d91-a436-4fe6-b5fd-1988e588ded8" containerName="registry-server" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.163189 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.165942 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.166148 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.172054 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.271685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.271986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.272113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373835 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373892 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.373955 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.374851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.379785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.389748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"collect-profiles-29517375-78vgp\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.532704 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:00 crc kubenswrapper[4867]: I0214 04:15:00.908825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 04:15:00 crc kubenswrapper[4867]: W0214 04:15:00.912144 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb80aae8_69eb_4098_af64_8a1ace025d53.slice/crio-2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45 WatchSource:0}: Error finding container 2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45: Status 404 returned error can't find the container with id 2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45 Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528321 4867 generic.go:334] "Generic (PLEG): container finished" podID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerID="5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837" exitCode=0 Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerDied","Data":"5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837"} Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.528403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerStarted","Data":"2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45"} Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.542357 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 14 04:15:01 crc kubenswrapper[4867]: I0214 04:15:01.966792 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.798972 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911165 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.911273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") pod \"cb80aae8-69eb-4098-af64-8a1ace025d53\" (UID: \"cb80aae8-69eb-4098-af64-8a1ace025d53\") " Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.912107 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.916974 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:02 crc kubenswrapper[4867]: I0214 04:15:02.917011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk" (OuterVolumeSpecName: "kube-api-access-mgpxk") pod "cb80aae8-69eb-4098-af64-8a1ace025d53" (UID: "cb80aae8-69eb-4098-af64-8a1ace025d53"). InnerVolumeSpecName "kube-api-access-mgpxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015063 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb80aae8-69eb-4098-af64-8a1ace025d53-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015092 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgpxk\" (UniqueName: \"kubernetes.io/projected/cb80aae8-69eb-4098-af64-8a1ace025d53-kube-api-access-mgpxk\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.015103 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb80aae8-69eb-4098-af64-8a1ace025d53-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538123 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" event={"ID":"cb80aae8-69eb-4098-af64-8a1ace025d53","Type":"ContainerDied","Data":"2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45"} Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538164 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f5128266a3aa5b15601b3f70b02001dc0d696e8cc344294deb8d0622ea55e45" Feb 14 04:15:03 crc kubenswrapper[4867]: I0214 04:15:03.538182 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp" Feb 14 04:15:05 crc kubenswrapper[4867]: I0214 04:15:05.456148 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 14 04:15:06 crc kubenswrapper[4867]: I0214 04:15:06.001106 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 14 04:15:06 crc kubenswrapper[4867]: I0214 04:15:06.418912 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 14 04:15:07 crc kubenswrapper[4867]: I0214 04:15:07.634122 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.321835 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.322127 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" containerID="cri-o://d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857" gracePeriod=30 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.408368 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.408617 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager" containerID="cri-o://f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180" 
gracePeriod=30 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.467289 4867 patch_prober.go:28] interesting pod/controller-manager-748d4597b7-zr2sc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.467341 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.589897 4867 generic.go:334] "Generic (PLEG): container finished" podID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerID="f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180" exitCode=0 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.589971 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerDied","Data":"f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180"} Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.595023 4867 generic.go:334] "Generic (PLEG): container finished" podID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerID="d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857" exitCode=0 Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.595073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerDied","Data":"d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857"} Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.703681 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.749345 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882306 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882327 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882447 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") pod \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\" (UID: \"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.882486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") pod \"b9320aa8-606f-42da-94c7-886ddd1a0646\" (UID: \"b9320aa8-606f-42da-94c7-886ddd1a0646\") " Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" 
(UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883354 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config" (OuterVolumeSpecName: "config") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883606 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883626 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9320aa8-606f-42da-94c7-886ddd1a0646-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883638 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.883878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config" (OuterVolumeSpecName: "config") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887595 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9" (OuterVolumeSpecName: "kube-api-access-7sbq9") pod "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" (UID: "c312f687-8694-4be3-a1ac-ddb1a0e8e1e6"). InnerVolumeSpecName "kube-api-access-7sbq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.887962 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9" (OuterVolumeSpecName: "kube-api-access-g9fs9") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "kube-api-access-g9fs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.888353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9320aa8-606f-42da-94c7-886ddd1a0646" (UID: "b9320aa8-606f-42da-94c7-886ddd1a0646"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984662 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984732 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fs9\" (UniqueName: \"kubernetes.io/projected/b9320aa8-606f-42da-94c7-886ddd1a0646-kube-api-access-g9fs9\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984748 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9320aa8-606f-42da-94c7-886ddd1a0646-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984758 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984766 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:08 crc kubenswrapper[4867]: I0214 04:15:08.984776 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sbq9\" (UniqueName: \"kubernetes.io/projected/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6-kube-api-access-7sbq9\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601002 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" event={"ID":"b9320aa8-606f-42da-94c7-886ddd1a0646","Type":"ContainerDied","Data":"541ea6e9e6c3a77aac7816654698f9c602bfc9a3197a2fd757215b2f093807ec"} Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601068 4867 scope.go:117] "RemoveContainer" containerID="f157b04c5dcfd4a5e66739ecf3f255670013221d2f63682930806f03de907180" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.601349 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.602236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" event={"ID":"c312f687-8694-4be3-a1ac-ddb1a0e8e1e6","Type":"ContainerDied","Data":"e4132b3ddfc13f1765cbd4d8f6a797c02ea70c5da037aeea7a90fb80fbf566d7"} Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.602306 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-748d4597b7-zr2sc" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.616610 4867 scope.go:117] "RemoveContainer" containerID="d4aead393cb2b02a428fb28661f16918a1873ee0f2ed4a30857ac163193d3857" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.620157 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.623820 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-74548f6c84-krdz8"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.628776 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.631331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-748d4597b7-zr2sc"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.931231 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"] Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945242 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945279 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945307 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945314 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: E0214 04:15:09.945322 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945328 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945489 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" containerName="collect-profiles" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945517 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" containerName="route-controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.945526 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" containerName="controller-manager" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.946151 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.946344 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.947554 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.948343 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953204 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953611 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"] Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953693 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.953928 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954093 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954246 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954576 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954704 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.954896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.955028 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.955065 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.959191 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:15:09 crc kubenswrapper[4867]: I0214 04:15:09.962670 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"] Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099216 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099577 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " 
pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.099648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200592 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200703 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.200944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201303 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" Feb 14 04:15:10 crc 
kubenswrapper[4867]: I0214 04:15:10.201360 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.201387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.202735 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.202801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.203028 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.204596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.205684 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.206451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.212226 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.223829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"controller-manager-5cb8bf5b5c-f5pvq\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") " pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.227329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"route-controller-manager-f78cb94dd-pp8qj\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") " pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.266118 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.274501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.570314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.609796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerStarted","Data":"a9053638ca02cf4b81c623ee2fa7a93b209439119614460e70b88e72705e85c2"}
Feb 14 04:15:10 crc kubenswrapper[4867]: I0214 04:15:10.715500 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:10 crc kubenswrapper[4867]: W0214 04:15:10.719299 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0e49f0b_ad6d_49a7_a2a3_10cba6dd6ac2.slice/crio-360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94 WatchSource:0}: Error finding container 360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94: Status 404 returned error can't find the container with id 360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.004653 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9320aa8-606f-42da-94c7-886ddd1a0646" path="/var/lib/kubelet/pods/b9320aa8-606f-42da-94c7-886ddd1a0646/volumes"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.005607 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c312f687-8694-4be3-a1ac-ddb1a0e8e1e6" path="/var/lib/kubelet/pods/c312f687-8694-4be3-a1ac-ddb1a0e8e1e6/volumes"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.617228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerStarted","Data":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.617499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.618818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerStarted","Data":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.618857 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerStarted","Data":"360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94"}
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.619484 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.623122 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.623380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.633434 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" podStartSLOduration=3.633420018 podStartE2EDuration="3.633420018s" podCreationTimestamp="2026-02-14 04:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:11.632119585 +0000 UTC m=+343.713056899" watchObservedRunningTime="2026-02-14 04:15:11.633420018 +0000 UTC m=+343.714357322"
Feb 14 04:15:11 crc kubenswrapper[4867]: I0214 04:15:11.659127 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" podStartSLOduration=3.659101984 podStartE2EDuration="3.659101984s" podCreationTimestamp="2026-02-14 04:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:11.655912492 +0000 UTC m=+343.736849806" watchObservedRunningTime="2026-02-14 04:15:11.659101984 +0000 UTC m=+343.740039298"
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.977308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979273 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979600 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979629 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.979794 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 14 04:15:12 crc kubenswrapper[4867]: I0214 04:15:12.985297 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"] Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.146970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.147018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.147053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.248063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.248110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 
04:15:13.248143 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.249076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/4a7e088b-b9a0-4187-9acc-601d315d8d0f-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.254053 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/4a7e088b-b9a0-4187-9acc-601d315d8d0f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.269909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kskfl\" (UniqueName: \"kubernetes.io/projected/4a7e088b-b9a0-4187-9acc-601d315d8d0f-kube-api-access-kskfl\") pod \"cluster-monitoring-operator-6d5b84845-9zpdc\" (UID: \"4a7e088b-b9a0-4187-9acc-601d315d8d0f\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.313026 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" Feb 14 04:15:13 crc kubenswrapper[4867]: I0214 04:15:13.757101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc"] Feb 14 04:15:13 crc kubenswrapper[4867]: W0214 04:15:13.761263 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a7e088b_b9a0_4187_9acc_601d315d8d0f.slice/crio-647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482 WatchSource:0}: Error finding container 647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482: Status 404 returned error can't find the container with id 647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482 Feb 14 04:15:14 crc kubenswrapper[4867]: I0214 04:15:14.636462 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" event={"ID":"4a7e088b-b9a0-4187-9acc-601d315d8d0f","Type":"ContainerStarted","Data":"647fdd9b2a7612423e4347dbd873aa13f3962b07554aa3dbcda5defad0882482"} Feb 14 04:15:14 crc kubenswrapper[4867]: I0214 04:15:14.727891 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.254825 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"] Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.255493 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.257086 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.258634 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-7rsz8" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.264754 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"] Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.387362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.489047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.489210 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.489273 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:16.98925227 +0000 UTC m=+349.070189584 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.647980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" event={"ID":"4a7e088b-b9a0-4187-9acc-601d315d8d0f","Type":"ContainerStarted","Data":"9d746188b23a7e6be17773e838adf45a22e82f6ed5dd0ae26f926e3c20c72059"} Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.662725 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-9zpdc" podStartSLOduration=2.857978295 podStartE2EDuration="4.662708169s" podCreationTimestamp="2026-02-14 04:15:12 +0000 UTC" firstStartedPulling="2026-02-14 04:15:13.763227133 +0000 UTC m=+345.844164447" lastFinishedPulling="2026-02-14 04:15:15.567956997 +0000 UTC m=+347.648894321" observedRunningTime="2026-02-14 04:15:16.661236371 +0000 UTC m=+348.742173695" watchObservedRunningTime="2026-02-14 04:15:16.662708169 +0000 UTC m=+348.743645483" Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.995806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.995990 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.996086 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:17.996064504 +0000 UTC m=+350.077001818 (durationBeforeRetry 1s). 
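The "Observed pod startup duration" entry above shows how the two durations relate: podStartE2EDuration spans creation to observed running, while podStartSLOduration excludes the image-pull window (firstStartedPulling to lastFinishedPulling). The logged numbers bear this out: 15.567956997 − 13.763227133 ≈ 1.804729864s of pulling, and 4.662708169 − 1.804729864 ≈ 2.857978305s, matching the logged podStartSLOduration=2.857978295 up to rounding. A quick check of that arithmetic from the logged timestamps (the layout string and the subtraction are mine, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the cluster-monitoring-operator entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	firstPull := parse("2026-02-14 04:15:13.763227133 +0000 UTC")
	lastPull := parse("2026-02-14 04:15:15.567956997 +0000 UTC")

	e2e := 4662708169 * time.Nanosecond // podStartE2EDuration="4.662708169s"
	pull := lastPull.Sub(firstPull)     // time spent pulling the image

	// SLO duration excludes image-pull time; compare with the logged value.
	fmt.Println(pull, e2e-pull) // 1.804729864s 2.857978305s (logged: 2.857978295)
}
```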
Feb 14 04:15:16 crc kubenswrapper[4867]: I0214 04:15:16.995806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.995990 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:16 crc kubenswrapper[4867]: E0214 04:15:16.996086 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:17.996064504 +0000 UTC m=+350.077001818 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.014729 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.014919 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerName="controller-manager" containerID="cri-o://79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" gracePeriod=30
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.030559 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.030802 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerName="route-controller-manager" containerID="cri-o://cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" gracePeriod=30
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.511073 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.596866 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604222 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604327 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.604351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") pod \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\" (UID: \"e50093fe-87a2-46d7-aab7-3bf4179dc49b\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.605451 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca" (OuterVolumeSpecName: "client-ca") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.605935 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config" (OuterVolumeSpecName: "config") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.612229 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf" (OuterVolumeSpecName: "kube-api-access-jlwrf") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "kube-api-access-jlwrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.613573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e50093fe-87a2-46d7-aab7-3bf4179dc49b" (UID: "e50093fe-87a2-46d7-aab7-3bf4179dc49b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.655878 4867 generic.go:334] "Generic (PLEG): container finished" podID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2" exitCode=0
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.655990 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerDied","Data":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"}
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj" event={"ID":"e50093fe-87a2-46d7-aab7-3bf4179dc49b","Type":"ContainerDied","Data":"a9053638ca02cf4b81c623ee2fa7a93b209439119614460e70b88e72705e85c2"}
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.656132 4867 scope.go:117] "RemoveContainer" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658174 4867 generic.go:334] "Generic (PLEG): container finished" podID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69" exitCode=0
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerDied","Data":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"}
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq" event={"ID":"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2","Type":"ContainerDied","Data":"360828fdf971e05929afc0cdacaf1fd44127fa200f8fc77f1916b7cb060bcb94"}
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.658319 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.676891 4867 scope.go:117] "RemoveContainer" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"
Feb 14 04:15:17 crc kubenswrapper[4867]: E0214 04:15:17.677229 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": container with ID starting with cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2 not found: ID does not exist" containerID="cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.677261 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2"} err="failed to get container status \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": rpc error: code = NotFound desc = could not find container \"cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2\": container with ID starting with cec0c2937bd7622aca6d6cadfea46713d67a14b0de8cfcd88c7d84ab9a7580e2 not found: ID does not exist"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.677281 4867 scope.go:117] "RemoveContainer" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.688957 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691323 4867 scope.go:117] "RemoveContainer" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"
Feb 14 04:15:17 crc kubenswrapper[4867]: E0214 04:15:17.691732 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": container with ID starting with 79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69 not found: ID does not exist" containerID="79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691760 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69"} err="failed to get container status \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": rpc error: code = NotFound desc = could not find container \"79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69\": container with ID starting with 79b6df8ad449678d1ab295023a9a7003c72a1a06cb1ca593b34794017c19ab69 not found: ID does not exist"
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.691972 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-f78cb94dd-pp8qj"]
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705259 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705315 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705419 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") pod \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\" (UID: \"a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2\") "
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705645 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705663 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e50093fe-87a2-46d7-aab7-3bf4179dc49b-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705672 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwrf\" (UniqueName: \"kubernetes.io/projected/e50093fe-87a2-46d7-aab7-3bf4179dc49b-kube-api-access-jlwrf\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.705681 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e50093fe-87a2-46d7-aab7-3bf4179dc49b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706656 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config" (OuterVolumeSpecName: "config") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.706631 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca" (OuterVolumeSpecName: "client-ca") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.708520 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv" (OuterVolumeSpecName: "kube-api-access-6dvsv") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "kube-api-access-6dvsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.709759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" (UID: "a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807547 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dvsv\" (UniqueName: \"kubernetes.io/projected/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-kube-api-access-6dvsv\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807610 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-client-ca\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807642 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807668 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.807691 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.989081 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:17 crc kubenswrapper[4867]: I0214 04:15:17.995985 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5cb8bf5b5c-f5pvq"]
Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.010042 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.010273 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:18 crc kubenswrapper[4867]: E0214 04:15:18.010348 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:20.010330607 +0000 UTC m=+352.091267931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found
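This is the third failed SetUp for tls-certificates, and the durationBeforeRetry has doubled each time: 500ms, 1s, now 2s, with 4s in the next attempt further down. That is plain exponential backoff on the pending-operation queue; a minimal sketch of the doubling (the base and cap values here are read off this log and chosen for illustration, not taken from kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the previous delay, starting at base and clamping at cap.
func nextRetry(prev, base, cap time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	d := 2 * prev
	if d > cap {
		return cap
	}
	return d
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextRetry(d, 500*time.Millisecond, 2*time.Minute)
		fmt.Println(d) // 500ms 1s 2s 4s 8s — the durationBeforeRetry sequence
	}
}
```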
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946360 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946791 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946926 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.946792 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.947343 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.947396 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949098 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949324 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949567 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.949778 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.951326 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.951464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.962823 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.965767 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:18 crc kubenswrapper[4867]: I0214 04:15:18.966315 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.012879 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2" path="/var/lib/kubelet/pods/a0e49f0b-ad6d-49a7-a2a3-10cba6dd6ac2/volumes" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.015410 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e50093fe-87a2-46d7-aab7-3bf4179dc49b" path="/var/lib/kubelet/pods/e50093fe-87a2-46d7-aab7-3bf4179dc49b/volumes" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123373 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.123682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.124613 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.124894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125072 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125258 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.125405 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226935 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.226972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.227024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.227080 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.228925 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.229000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.229905 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.230861 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.234466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.240739 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.240800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.248569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"route-controller-manager-6dd4d98c55-vl8mx\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.257814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"controller-manager-645fd87585-cg7sr\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.284388 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.302147 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.494329 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:19 crc kubenswrapper[4867]: W0214 04:15:19.504604 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd15dd24_0b64_4213_842f_5727fdedffaf.slice/crio-f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1 WatchSource:0}: Error finding container f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1: Status 404 returned error can't find the container with id f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1 Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.532825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:19 crc kubenswrapper[4867]: W0214 04:15:19.540082 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod460ab01d_a050_4210_8f77_1564c687b8aa.slice/crio-ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe WatchSource:0}: Error finding container ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe: Status 404 returned error can't find the container with id ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.671430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerStarted","Data":"ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe"} Feb 14 04:15:19 crc kubenswrapper[4867]: I0214 04:15:19.672522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerStarted","Data":"f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1"} 
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.038120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"
Feb 14 04:15:20 crc kubenswrapper[4867]: E0214 04:15:20.038314 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:20 crc kubenswrapper[4867]: E0214 04:15:20.038603 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:24.038581209 +0000 UTC m=+356.119518523 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.681870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerStarted","Data":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"}
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.682132 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.683878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerStarted","Data":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"}
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.684847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr"
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.689875 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.694258 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr"
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.711013 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" podStartSLOduration=3.7109971379999998 podStartE2EDuration="3.710997138s" podCreationTimestamp="2026-02-14 04:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:20.710122685 +0000 UTC m=+352.791060019" watchObservedRunningTime="2026-02-14 04:15:20.710997138 +0000 UTC m=+352.791934452"
Feb 14 04:15:20 crc kubenswrapper[4867]: I0214 04:15:20.764240 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" podStartSLOduration=3.764212958 podStartE2EDuration="3.764212958s" podCreationTimestamp="2026-02-14 04:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:20.760743538 +0000 UTC m=+352.841680942" watchObservedRunningTime="2026-02-14 04:15:20.764212958 +0000 UTC m=+352.845150292"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.257695 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"]
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.648331 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"]
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.649100 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.657922 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"]
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.777951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrfh\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-kube-api-access-fwrfh\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-certificates\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-trusted-ca\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-bound-sa-token\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.778913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.800029 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m"
(UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880276 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.880316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881466 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-certificates\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.881501 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-trusted-ca\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.888863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.889134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-registry-tls\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.897495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-bound-sa-token\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc 
kubenswrapper[4867]: I0214 04:15:22.898326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrfh\" (UniqueName: \"kubernetes.io/projected/bbf9502a-06eb-4e94-911a-3a7ac1426dd8-kube-api-access-fwrfh\") pod \"image-registry-66df7c8f76-wwh9m\" (UID: \"bbf9502a-06eb-4e94-911a-3a7ac1426dd8\") " pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:22 crc kubenswrapper[4867]: I0214 04:15:22.964791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.349763 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-wwh9m"] Feb 14 04:15:23 crc kubenswrapper[4867]: W0214 04:15:23.357756 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbf9502a_06eb_4e94_911a_3a7ac1426dd8.slice/crio-1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67 WatchSource:0}: Error finding container 1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67: Status 404 returned error can't find the container with id 1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67 Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699105 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" event={"ID":"bbf9502a-06eb-4e94-911a-3a7ac1426dd8","Type":"ContainerStarted","Data":"972779f98658ff6da5bb0e972489175cb939a80dd58a4d23e04dc2b8617b4c65"} Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699461 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.699479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" event={"ID":"bbf9502a-06eb-4e94-911a-3a7ac1426dd8","Type":"ContainerStarted","Data":"1306214d36d4fc4cf450d2329be291ed5fafa54839f0cd306bfec562ccbebb67"} Feb 14 04:15:23 crc kubenswrapper[4867]: I0214 04:15:23.719425 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podStartSLOduration=1.7194003279999999 podStartE2EDuration="1.719400328s" podCreationTimestamp="2026-02-14 04:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:23.715351593 +0000 UTC m=+355.796288947" watchObservedRunningTime="2026-02-14 04:15:23.719400328 +0000 UTC m=+355.800337662" Feb 14 04:15:24 crc kubenswrapper[4867]: I0214 04:15:24.097225 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:24 crc kubenswrapper[4867]: E0214 04:15:24.097395 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:24 crc kubenswrapper[4867]: E0214 04:15:24.097455 4867 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:32.097436542 +0000 UTC m=+364.178373856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:31 crc kubenswrapper[4867]: I0214 04:15:31.250938 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:15:31 crc kubenswrapper[4867]: I0214 04:15:31.251665 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:15:32 crc kubenswrapper[4867]: I0214 04:15:32.102469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:32 crc kubenswrapper[4867]: E0214 04:15:32.102744 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:32 crc kubenswrapper[4867]: E0214 04:15:32.102857 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates podName:b967a9e8-e5f1-4c92-889a-1dd6adf747fd nodeName:}" failed. No retries permitted until 2026-02-14 04:15:48.102820603 +0000 UTC m=+380.183757957 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates") pod "prometheus-operator-admission-webhook-f54c54754-72mpc" (UID: "b967a9e8-e5f1-4c92-889a-1dd6adf747fd") : secret "prometheus-operator-admission-webhook-tls" not found Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.133383 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.134336 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.136266 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.142975 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216880 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.216936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318431 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-catalog-content\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.318899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0fe6db4-add0-4993-a40c-c5b6725565fa-utilities\") pod \"certified-operators-mrccv\" (UID: 
\"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.330318 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.331303 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.335412 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.339542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9jdb\" (UniqueName: \"kubernetes.io/projected/e0fe6db4-add0-4993-a40c-c5b6725565fa-kube-api-access-v9jdb\") pod \"certified-operators-mrccv\" (UID: \"e0fe6db4-add0-4993-a40c-c5b6725565fa\") " pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.341472 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25nk5\" (UniqueName: \"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.420414 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.450859 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.521921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25nk5\" (UniqueName: \"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.522358 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-catalog-content\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.522441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be125812-eeef-4043-bef9-fea01037dddb-utilities\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.542182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25nk5\" (UniqueName: \"kubernetes.io/projected/be125812-eeef-4043-bef9-fea01037dddb-kube-api-access-25nk5\") pod \"community-operators-w69fq\" (UID: \"be125812-eeef-4043-bef9-fea01037dddb\") " pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.664778 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:33 crc kubenswrapper[4867]: I0214 04:15:33.840614 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mrccv"] Feb 14 04:15:33 crc kubenswrapper[4867]: W0214 04:15:33.845990 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0fe6db4_add0_4993_a40c_c5b6725565fa.slice/crio-4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873 WatchSource:0}: Error finding container 4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873: Status 404 returned error can't find the container with id 4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.041667 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w69fq"] Feb 14 04:15:34 crc kubenswrapper[4867]: W0214 04:15:34.047822 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe125812_eeef_4043_bef9_fea01037dddb.slice/crio-e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b WatchSource:0}: Error finding container e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b: Status 404 returned error can't find the container with id e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.757995 4867 generic.go:334] "Generic (PLEG): container finished" podID="be125812-eeef-4043-bef9-fea01037dddb" containerID="450f441c3fc59e9212fe447420930708c0698125bce6cd66d1552fe6d6695ba6" exitCode=0 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.758048 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerDied","Data":"450f441c3fc59e9212fe447420930708c0698125bce6cd66d1552fe6d6695ba6"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.758570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerStarted","Data":"e7632a4935f6e53caf49f08bcf459867c8d51565affe150d2dd91bb73296e20b"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760391 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="f7057ae1c4e2413e60ccd9e1345e2b034e0ca95a6196aeca71b8376a3e569f50" exitCode=0 Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"f7057ae1c4e2413e60ccd9e1345e2b034e0ca95a6196aeca71b8376a3e569f50"} Feb 14 04:15:34 crc kubenswrapper[4867]: I0214 04:15:34.760450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"4f098b431d5934c21c393a1541a639e82160499facce797045d7a0dae4cf3873"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.542035 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.543369 4867 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.545946 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.556327 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653920 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.653949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.730656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.731586 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.737041 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.745717 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.755537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.756019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-utilities\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.756019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-catalog-content\") pod \"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.765964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.767595 4867 generic.go:334] "Generic (PLEG): container finished" podID="be125812-eeef-4043-bef9-fea01037dddb" containerID="ccfec040a9892d4263ee046f6330e18b2c143d33869eb2511259bf87aeda48cb" exitCode=0 Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.767631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerDied","Data":"ccfec040a9892d4263ee046f6330e18b2c143d33869eb2511259bf87aeda48cb"} Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.780500 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppdt\" (UniqueName: \"kubernetes.io/projected/c8fe62eb-932d-4b17-8ffa-6c90780bdd74-kube-api-access-dppdt\") pod 
\"redhat-marketplace-gbz8c\" (UID: \"c8fe62eb-932d-4b17-8ffa-6c90780bdd74\") " pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.856987 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.860703 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.957809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958128 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.958968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-utilities\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.959141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140d0152-99c5-425c-b956-595dea337206-catalog-content\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:35 crc kubenswrapper[4867]: I0214 04:15:35.975583 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk6f8\" (UniqueName: \"kubernetes.io/projected/140d0152-99c5-425c-b956-595dea337206-kube-api-access-bk6f8\") pod \"redhat-operators-bvb8v\" (UID: \"140d0152-99c5-425c-b956-595dea337206\") " pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.046984 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.237444 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbz8c"] Feb 14 04:15:36 crc kubenswrapper[4867]: W0214 04:15:36.246851 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8fe62eb_932d_4b17_8ffa_6c90780bdd74.slice/crio-909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a WatchSource:0}: Error finding container 909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a: Status 404 returned error can't find the container with id 909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.407654 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bvb8v"] Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.775714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w69fq" event={"ID":"be125812-eeef-4043-bef9-fea01037dddb","Type":"ContainerStarted","Data":"69ab4f23480ad187e639a58fd17104be8c48c506d1ee3c45267693b5ee9cc4a9"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778173 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerID="bc39cfe6c4e3f56df9c5948f4fa345c452ace9e770ea0fba08c4fc6389bc05b2" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778215 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerDied","Data":"bc39cfe6c4e3f56df9c5948f4fa345c452ace9e770ea0fba08c4fc6389bc05b2"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.778248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"909a4d6ba1cd96b4355f13c8201e808244da2a8bf25edf4c0728314815252c0a"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779881 4867 generic.go:334] "Generic (PLEG): container finished" podID="140d0152-99c5-425c-b956-595dea337206" containerID="ccb9fa229a0d0673ab8663782ef04bf45bc05fe821571028056bb1469529e936" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerDied","Data":"ccb9fa229a0d0673ab8663782ef04bf45bc05fe821571028056bb1469529e936"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.779975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"1f273fc0233825535c5879cadfe14d979a0b834e11cf19e55eeaf980caad47ed"} Feb 14 04:15:36 crc 
kubenswrapper[4867]: I0214 04:15:36.782790 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e" exitCode=0 Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.782832 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"8c1adc5e45fa6551a914874e9d31f9ae8f905ef3c1f028ce884edd5ee5d1cf3e"} Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.795417 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w69fq" podStartSLOduration=2.3673676869999998 podStartE2EDuration="3.795400911s" podCreationTimestamp="2026-02-14 04:15:33 +0000 UTC" firstStartedPulling="2026-02-14 04:15:34.759954224 +0000 UTC m=+366.840891538" lastFinishedPulling="2026-02-14 04:15:36.187987448 +0000 UTC m=+368.268924762" observedRunningTime="2026-02-14 04:15:36.794176069 +0000 UTC m=+368.875113393" watchObservedRunningTime="2026-02-14 04:15:36.795400911 +0000 UTC m=+368.876338225" Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.988172 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:36 crc kubenswrapper[4867]: I0214 04:15:36.988408 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" containerID="cri-o://0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" gracePeriod=30 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.020344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.020618 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" containerID="cri-o://dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" gracePeriod=30 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.576170 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.666494 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681158 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681296 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681382 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") pod \"cd15dd24-0b64-4213-842f-5727fdedffaf\" (UID: \"cd15dd24-0b64-4213-842f-5727fdedffaf\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.681981 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.682017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config" (OuterVolumeSpecName: "config") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.690697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.698752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7" (OuterVolumeSpecName: "kube-api-access-qhmp7") pod "cd15dd24-0b64-4213-842f-5727fdedffaf" (UID: "cd15dd24-0b64-4213-842f-5727fdedffaf"). InnerVolumeSpecName "kube-api-access-qhmp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782395 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782463 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782520 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782541 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") pod \"460ab01d-a050-4210-8f77-1564c687b8aa\" (UID: \"460ab01d-a050-4210-8f77-1564c687b8aa\") " Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782799 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmp7\" (UniqueName: \"kubernetes.io/projected/cd15dd24-0b64-4213-842f-5727fdedffaf-kube-api-access-qhmp7\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782811 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782824 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd15dd24-0b64-4213-842f-5727fdedffaf-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.782834 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd15dd24-0b64-4213-842f-5727fdedffaf-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783183 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config" (OuterVolumeSpecName: "config") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.783317 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca" (OuterVolumeSpecName: "client-ca") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.788655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.788852 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh" (OuterVolumeSpecName: "kube-api-access-lv2fh") pod "460ab01d-a050-4210-8f77-1564c687b8aa" (UID: "460ab01d-a050-4210-8f77-1564c687b8aa"). InnerVolumeSpecName "kube-api-access-lv2fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.793500 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.795401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796900 4867 generic.go:334] "Generic (PLEG): container finished" podID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" exitCode=0 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerDied","Data":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796961 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.796982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx" event={"ID":"cd15dd24-0b64-4213-842f-5727fdedffaf","Type":"ContainerDied","Data":"f67f6d6f5857e795b0810abf5a8af2c6365a5e6e9a844dfa3bbdd069b8dcceb1"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.797006 4867 scope.go:117] "RemoveContainer" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798275 4867 generic.go:334] "Generic (PLEG): container finished" podID="460ab01d-a050-4210-8f77-1564c687b8aa" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" exitCode=0 Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798323 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.798345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerDied","Data":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.799797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-645fd87585-cg7sr" event={"ID":"460ab01d-a050-4210-8f77-1564c687b8aa","Type":"ContainerDied","Data":"ec17f4737d4f6752779dbdb60d879bee862c16976ddcbbba41458c6d682fa9fe"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.806842 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423"} Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815361 4867 scope.go:117] "RemoveContainer" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: E0214 04:15:37.815908 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": container with ID starting with dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d not found: ID does not exist" containerID="dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815935 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d"} err="failed to get container status \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": rpc error: code = NotFound desc = could not find container \"dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d\": container with ID starting with dc49d46bbf08c3a8f13c574f31042497b2f838320abe428a9400869962f6a94d not found: ID does not exist" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.815957 4867 scope.go:117] "RemoveContainer" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc 
kubenswrapper[4867]: I0214 04:15:37.819005 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mrccv" podStartSLOduration=2.413322938 podStartE2EDuration="4.818983007s" podCreationTimestamp="2026-02-14 04:15:33 +0000 UTC" firstStartedPulling="2026-02-14 04:15:34.761923985 +0000 UTC m=+366.842861299" lastFinishedPulling="2026-02-14 04:15:37.167584054 +0000 UTC m=+369.248521368" observedRunningTime="2026-02-14 04:15:37.814425499 +0000 UTC m=+369.895362833" watchObservedRunningTime="2026-02-14 04:15:37.818983007 +0000 UTC m=+369.899920321" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.831362 4867 scope.go:117] "RemoveContainer" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc kubenswrapper[4867]: E0214 04:15:37.831946 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": container with ID starting with 0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee not found: ID does not exist" containerID="0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.831990 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee"} err="failed to get container status \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": rpc error: code = NotFound desc = could not find container \"0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee\": container with ID starting with 0d82750f6cb70aa5afe9e78ddf40f91e7f394d82ecfe5e44397ce38b6f93dbee not found: ID does not exist" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.885234 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887909 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887962 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460ab01d-a050-4210-8f77-1564c687b8aa-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887978 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv2fh\" (UniqueName: \"kubernetes.io/projected/460ab01d-a050-4210-8f77-1564c687b8aa-kube-api-access-lv2fh\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.887995 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.888007 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/460ab01d-a050-4210-8f77-1564c687b8aa-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.896499 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-645fd87585-cg7sr"] 
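
Note on the recurring mount failure above: MountVolume.SetUp for volume "tls-certificates" fails at 04:15:20, 04:15:24, and 04:15:32 because secret "prometheus-operator-admission-webhook-tls" does not exist yet, and nestedpendingoperations schedules each retry with a doubling durationBeforeRetry (4s, then 8s, then 16s). A minimal Go sketch of that doubling-backoff pattern follows; it only illustrates the behavior visible in the log, not the kubelet's actual implementation, and the 2s seed and 2m cap are assumptions.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after every failure,
// which reproduces the 4s -> 8s -> 16s durationBeforeRetry progression
// seen in the log above. Seed and cap values are illustrative assumptions.
func retryWithBackoff(op func() error) {
	wait := 2 * time.Second // doubled to 4s before the first retry
	const maxWait = 2 * time.Minute
	for {
		if err := op(); err == nil {
			return
		}
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
		fmt.Printf("no retries permitted until now+%s\n", wait)
		time.Sleep(wait)
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New(`secret "prometheus-operator-admission-webhook-tls" not found`)
		}
		return nil // secret eventually appears and the mount succeeds
	})
}
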
Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.906344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:37 crc kubenswrapper[4867]: I0214 04:15:37.911043 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dd4d98c55-vl8mx"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.812226 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerID="e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21" exitCode=0 Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.812276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerDied","Data":"e536bcf6e044b2186c74c96bf70e1fc9fdbed61298a6d9edb177cbaf3be1ab21"} Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.817031 4867 generic.go:334] "Generic (PLEG): container finished" podID="140d0152-99c5-425c-b956-595dea337206" containerID="fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423" exitCode=0 Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.818004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerDied","Data":"fe1d1c3d5c0a2edd6e41c4c3e268598df1771a8ec2436a1ec87fa2eead423423"} Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.949810 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:38 crc kubenswrapper[4867]: E0214 04:15:38.950069 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950084 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: E0214 04:15:38.950095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950102 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950235 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" containerName="controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950257 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" containerName="route-controller-manager" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.950715 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.955071 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.956054 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.958535 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.962646 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963229 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963348 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963348 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963380 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963412 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963496 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963374 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963558 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963622 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.963671 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.964241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.965796 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:15:38 crc kubenswrapper[4867]: I0214 04:15:38.969308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.017318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="460ab01d-a050-4210-8f77-1564c687b8aa" path="/var/lib/kubelet/pods/460ab01d-a050-4210-8f77-1564c687b8aa/volumes" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.018014 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd15dd24-0b64-4213-842f-5727fdedffaf" path="/var/lib/kubelet/pods/cd15dd24-0b64-4213-842f-5727fdedffaf/volumes" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105316 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105443 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105531 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105898 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.105988 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206691 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206780 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.206925 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.207710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.207982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.208236 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.209018 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.209291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.215298 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.226251 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " 
pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.232374 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"controller-manager-64cd899fff-wknv7\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.243087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"route-controller-manager-856f7b9d6f-8fm27\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.269607 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:39 crc kubenswrapper[4867]: I0214 04:15:39.282915 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.461132 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.846029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbz8c" event={"ID":"c8fe62eb-932d-4b17-8ffa-6c90780bdd74","Type":"ContainerStarted","Data":"b65587de43aa6ea02405a8183ab53782da2064888cd423d8e57c6b42b146f30e"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.850759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bvb8v" event={"ID":"140d0152-99c5-425c-b956-595dea337206","Type":"ContainerStarted","Data":"d8dc2df6324b08cc38cc32dd78258391d8945bcaac442105f07f930438bed3e2"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerStarted","Data":"a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerStarted","Data":"2fd954bb3352333915558ae50d85ba987c86894ea554a0ffcd81defc26a5063c"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.852388 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.873742 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbz8c" podStartSLOduration=2.388542174 podStartE2EDuration="4.873721384s" podCreationTimestamp="2026-02-14 04:15:35 +0000 UTC" firstStartedPulling="2026-02-14 04:15:36.779425407 +0000 UTC m=+368.860362731" 
lastFinishedPulling="2026-02-14 04:15:39.264604627 +0000 UTC m=+371.345541941" observedRunningTime="2026-02-14 04:15:39.869811073 +0000 UTC m=+371.950748377" watchObservedRunningTime="2026-02-14 04:15:39.873721384 +0000 UTC m=+371.954658708" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:39.898476 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bvb8v" podStartSLOduration=2.133193932 podStartE2EDuration="4.898457666s" podCreationTimestamp="2026-02-14 04:15:35 +0000 UTC" firstStartedPulling="2026-02-14 04:15:36.781039109 +0000 UTC m=+368.861976423" lastFinishedPulling="2026-02-14 04:15:39.546302843 +0000 UTC m=+371.627240157" observedRunningTime="2026-02-14 04:15:39.894876093 +0000 UTC m=+371.975813427" watchObservedRunningTime="2026-02-14 04:15:39.898457666 +0000 UTC m=+371.979394970" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.001481 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.018815 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" podStartSLOduration=3.018760056 podStartE2EDuration="3.018760056s" podCreationTimestamp="2026-02-14 04:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:39.917993303 +0000 UTC m=+371.998930617" watchObservedRunningTime="2026-02-14 04:15:40.018760056 +0000 UTC m=+372.099697370" Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.565026 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:40 crc kubenswrapper[4867]: W0214 04:15:40.569646 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b0aebec_9a44_4db3_9bcb_e63c5f1748c8.slice/crio-9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca WatchSource:0}: Error finding container 9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca: Status 404 returned error can't find the container with id 9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.858406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerStarted","Data":"f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.858726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerStarted","Data":"9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca"} Feb 14 04:15:40 crc kubenswrapper[4867]: I0214 04:15:40.886457 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" podStartSLOduration=3.886440929 podStartE2EDuration="3.886440929s" podCreationTimestamp="2026-02-14 04:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 04:15:40.882723322 +0000 UTC m=+372.963660646" watchObservedRunningTime="2026-02-14 04:15:40.886440929 +0000 UTC m=+372.967378243" Feb 14 04:15:41 crc kubenswrapper[4867]: I0214 04:15:41.863033 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:41 crc kubenswrapper[4867]: I0214 04:15:41.867143 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:42 crc kubenswrapper[4867]: I0214 04:15:42.969817 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.013210 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.451612 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.451969 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.498569 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.665359 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.665703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.701863 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.907222 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w69fq" Feb 14 04:15:43 crc kubenswrapper[4867]: I0214 04:15:43.911181 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.861619 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.861696 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.899365 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:45 crc kubenswrapper[4867]: I0214 04:15:45.945182 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbz8c" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.048328 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.048393 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.095000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:46 crc kubenswrapper[4867]: I0214 04:15:46.935389 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bvb8v" Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.298484 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" containerID="cri-o://271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" gracePeriod=15 Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.652961 4867 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-c65kr container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" start-of-body= Feb 14 04:15:47 crc kubenswrapper[4867]: I0214 04:15:47.653019 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.34:6443/healthz\": dial tcp 10.217.0.34:6443: connect: connection refused" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.121045 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.127299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/b967a9e8-e5f1-4c92-889a-1dd6adf747fd-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-72mpc\" (UID: \"b967a9e8-e5f1-4c92-889a-1dd6adf747fd\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.389490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-7rsz8" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.397993 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.784756 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc"] Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.901957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"55d18515117e753920e9e272d9a88de7c22f36f1d0b769a87520cf9673e87279"} Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.904015 4867 generic.go:334] "Generic (PLEG): container finished" podID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerID="271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" exitCode=0 Feb 14 04:15:48 crc kubenswrapper[4867]: I0214 04:15:48.904044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerDied","Data":"271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea"} Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.401031 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444011 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:50 crc kubenswrapper[4867]: E0214 04:15:50.444343 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444374 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.444561 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" containerName="oauth-openshift" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.445202 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.452083 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552873 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552973 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.552994 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553042 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553099 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553502 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553531 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553067 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553603 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553678 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553749 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553787 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.553848 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") pod \"0ad7b333-6328-41ea-a81d-bce9790b185a\" (UID: \"0ad7b333-6328-41ea-a81d-bce9790b185a\") " Feb 14 
04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554747 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554795 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod 
\"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554940 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.554970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555266 4867 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555278 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555289 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555299 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.555311 4867 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ad7b333-6328-41ea-a81d-bce9790b185a-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558218 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.558827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.559411 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.559918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560157 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.560563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.562954 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k" (OuterVolumeSpecName: "kube-api-access-tf64k") pod "0ad7b333-6328-41ea-a81d-bce9790b185a" (UID: "0ad7b333-6328-41ea-a81d-bce9790b185a"). InnerVolumeSpecName "kube-api-access-tf64k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656808 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656825 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656848 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc 
kubenswrapper[4867]: I0214 04:15:50.656883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656964 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.656994 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657084 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657097 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657108 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-ocp-branding-template\") on node 
\"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657117 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf64k\" (UniqueName: \"kubernetes.io/projected/0ad7b333-6328-41ea-a81d-bce9790b185a-kube-api-access-tf64k\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657127 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657136 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657146 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657156 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657167 4867 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ad7b333-6328-41ea-a81d-bce9790b185a-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.657291 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-dir\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.658351 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-audit-policies\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.658607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-service-ca\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.660962 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-cliconfig\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.661417 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-router-certs\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.661498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-serving-cert\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.662095 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-login\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663729 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.663882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-system-session\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.664427 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/351f0f21-497e-4c3e-99cc-30baff4e6484-v4-0-config-user-template-error\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.672937 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfkz\" (UniqueName: \"kubernetes.io/projected/351f0f21-497e-4c3e-99cc-30baff4e6484-kube-api-access-7wfkz\") pod \"oauth-openshift-79479887dd-9ltbt\" (UID: \"351f0f21-497e-4c3e-99cc-30baff4e6484\") " pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.758072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" event={"ID":"0ad7b333-6328-41ea-a81d-bce9790b185a","Type":"ContainerDied","Data":"0005bb5ab795f3cb3316208372a9d4195e426c2a1f38a510bf0162032f954a9f"} Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920548 4867 scope.go:117] "RemoveContainer" containerID="271deed38181d3d03a61bb60c701b3fc845d6907348df479c58ecd82b90d57ea" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.920284 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-c65kr" Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.973901 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:15:50 crc kubenswrapper[4867]: I0214 04:15:50.983441 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-c65kr"] Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.004343 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad7b333-6328-41ea-a81d-bce9790b185a" path="/var/lib/kubelet/pods/0ad7b333-6328-41ea-a81d-bce9790b185a/volumes" Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.223520 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-79479887dd-9ltbt"] Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926552 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"ec56011e077735a51f9641794580e4ff556553e447a8038ff938b25782de9471"} Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.926579 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:51 crc kubenswrapper[4867]: I0214 04:15:51.950580 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podStartSLOduration=29.950562596 podStartE2EDuration="29.950562596s" podCreationTimestamp="2026-02-14 04:15:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:15:51.949141099 +0000 UTC m=+384.030078413" watchObservedRunningTime="2026-02-14 04:15:51.950562596 +0000 UTC m=+384.031499920" Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.159157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.937617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"} Feb 14 04:15:52 crc kubenswrapper[4867]: I0214 04:15:52.951649 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podStartSLOduration=33.275061131 podStartE2EDuration="36.951630349s" podCreationTimestamp="2026-02-14 04:15:16 +0000 UTC" firstStartedPulling="2026-02-14 04:15:48.790026341 +0000 UTC m=+380.870963655" lastFinishedPulling="2026-02-14 04:15:52.466595559 +0000 UTC m=+384.547532873" observedRunningTime="2026-02-14 04:15:52.950292914 +0000 UTC m=+385.031230228" watchObservedRunningTime="2026-02-14 04:15:52.951630349 +0000 UTC m=+385.032567663" Feb 14 04:15:53 crc kubenswrapper[4867]: I0214 04:15:53.941850 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:53 crc kubenswrapper[4867]: I0214 04:15:53.945701 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.308340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.309411 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.311451 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.311616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-wzkj2" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.313009 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.313738 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.330693 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.347988 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348560 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.348618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449536 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod 
\"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.449633 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: E0214 04:15:54.450492 4867 secret.go:188] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 14 04:15:54 crc kubenswrapper[4867]: E0214 04:15:54.450588 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls podName:cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79 nodeName:}" failed. No retries permitted until 2026-02-14 04:15:54.950567162 +0000 UTC m=+387.031504476 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls") pod "prometheus-operator-db54df47d-g2d66" (UID: "cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79") : secret "prometheus-operator-tls" not found Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.453735 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-metrics-client-ca\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.473793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.482143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphl8\" (UniqueName: \"kubernetes.io/projected/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-kube-api-access-lphl8\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.957792 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: 
\"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:54 crc kubenswrapper[4867]: I0214 04:15:54.962296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-g2d66\" (UID: \"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79\") " pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.246065 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.714148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-g2d66"] Feb 14 04:15:55 crc kubenswrapper[4867]: I0214 04:15:55.954297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"7db2257006a1dce4c327dec7939024ac5808b3eee8119129b1c1a67673793112"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.077924 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.078557 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" containerID="cri-o://f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" gracePeriod=30 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.179747 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.179946 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" containerID="cri-o://a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" gracePeriod=30 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.974695 4867 generic.go:334] "Generic (PLEG): container finished" podID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerID="f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" exitCode=0 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.974768 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerDied","Data":"f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.981345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"de1c27492cf2ee3b7e71306ec0493f4eb050389488e398e112decb528537a85d"} Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.983384 4867 generic.go:334] "Generic (PLEG): container finished" podID="e06bd216-b4b8-4754-a364-76f41991e155" 
containerID="a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" exitCode=0 Feb 14 04:15:57 crc kubenswrapper[4867]: I0214 04:15:57.983479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerDied","Data":"a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de"} Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.084964 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099515 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099638 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.099673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") pod \"e06bd216-b4b8-4754-a364-76f41991e155\" (UID: \"e06bd216-b4b8-4754-a364-76f41991e155\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.100477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config" (OuterVolumeSpecName: "config") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.100795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca" (OuterVolumeSpecName: "client-ca") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.114481 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6" (OuterVolumeSpecName: "kube-api-access-gp5z6") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "kube-api-access-gp5z6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.114550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e06bd216-b4b8-4754-a364-76f41991e155" (UID: "e06bd216-b4b8-4754-a364-76f41991e155"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200422 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e06bd216-b4b8-4754-a364-76f41991e155-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200463 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp5z6\" (UniqueName: \"kubernetes.io/projected/e06bd216-b4b8-4754-a364-76f41991e155-kube-api-access-gp5z6\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200476 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.200486 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e06bd216-b4b8-4754-a364-76f41991e155-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.242334 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402434 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402492 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.402574 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") pod \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\" (UID: \"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8\") " Feb 14 04:15:58 crc kubenswrapper[4867]: 
I0214 04:15:58.403427 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.403468 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config" (OuterVolumeSpecName: "config") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.403558 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.405773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6" (OuterVolumeSpecName: "kube-api-access-x4ck6") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "kube-api-access-x4ck6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.405898 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" (UID: "8b0aebec-9a44-4db3-9bcb-e63c5f1748c8"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503477 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503524 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503536 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503544 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.503554 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4ck6\" (UniqueName: \"kubernetes.io/projected/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8-kube-api-access-x4ck6\") on node \"crc\" DevicePath \"\"" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968665 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:58 crc kubenswrapper[4867]: E0214 04:15:58.968920 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968940 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: E0214 04:15:58.968961 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.968970 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969099 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" containerName="controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969119 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e06bd216-b4b8-4754-a364-76f41991e155" containerName="route-controller-manager" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.969804 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" event={"ID":"e06bd216-b4b8-4754-a364-76f41991e155","Type":"ContainerDied","Data":"2fd954bb3352333915558ae50d85ba987c86894ea554a0ffcd81defc26a5063c"} Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991383 4867 scope.go:117] "RemoveContainer" containerID="a4e095f624f44728d2a3fc2a1dc0256cd3539ee32a625d8e202f8d80d7e3e7de" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.991618 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27" Feb 14 04:15:58 crc kubenswrapper[4867]: I0214 04:15:58.993803 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.005835 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.018430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64cd899fff-wknv7" event={"ID":"8b0aebec-9a44-4db3-9bcb-e63c5f1748c8","Type":"ContainerDied","Data":"9b0735e90ad616b3867ce1cf48d98caaf6b478aa90e26b262b52fd8cb6e1c8ca"} Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.019036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" event={"ID":"cfdf6bd8-5b7c-47eb-8763-9bf734d6cc79","Type":"ContainerStarted","Data":"c9e0ebbec040bfcfa0018745f36e8ec5e28793607a413016790f0c0f786c4220"} Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.023530 4867 scope.go:117] "RemoveContainer" containerID="f048a7044e6a5e0c4f276f047bccbb72c43ee0e536c6a2c0efeb288de5790980" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.046054 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.049499 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64cd899fff-wknv7"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.060008 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.060104 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-856f7b9d6f-8fm27"] Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.067754 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-g2d66" podStartSLOduration=3.136401999 podStartE2EDuration="5.067732997s" podCreationTimestamp="2026-02-14 04:15:54 +0000 UTC" firstStartedPulling="2026-02-14 04:15:55.725526838 +0000 UTC m=+387.806464152" lastFinishedPulling="2026-02-14 04:15:57.656857826 +0000 UTC m=+389.737795150" observedRunningTime="2026-02-14 04:15:59.064384401 +0000 UTC m=+391.145321715" watchObservedRunningTime="2026-02-14 04:15:59.067732997 +0000 UTC m=+391.148670311" 
Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.109942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110146 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.110221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.211761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 
04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.212935 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.213375 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.216677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.230054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"route-controller-manager-658bcc664-kwbrz\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.289463 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:15:59 crc kubenswrapper[4867]: I0214 04:15:59.673561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:15:59 crc kubenswrapper[4867]: W0214 04:15:59.678724 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96b49908_c23d_45d6_b7fa_3d718d01ee00.slice/crio-36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c WatchSource:0}: Error finding container 36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c: Status 404 returned error can't find the container with id 36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerStarted","Data":"6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6"} Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerStarted","Data":"36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c"} Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.027963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.046124 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" podStartSLOduration=3.046106935 podStartE2EDuration="3.046106935s" podCreationTimestamp="2026-02-14 04:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:00.044889493 +0000 UTC m=+392.125826797" watchObservedRunningTime="2026-02-14 04:16:00.046106935 +0000 UTC m=+392.127044249" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.455040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.686942 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.688030 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: W0214 04:16:00.698808 4867 reflector.go:561] object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config": failed to list *v1.Secret: secrets "openshift-state-metrics-kube-rbac-proxy-config" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Feb 14 04:16:00 crc kubenswrapper[4867]: E0214 04:16:00.698861 4867 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-state-metrics-kube-rbac-proxy-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-state-metrics-kube-rbac-proxy-config\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.703277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.712476 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.726047 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.727453 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730046 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730189 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.730664 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.749580 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.752909 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-r85dv"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.754375 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.756271 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.758174 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844381 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844453 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844517 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.844867 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d56jc\" (UniqueName: 
\"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zrmq\" (UniqueName: \"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.845173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.946936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947052 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" 
(UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.947307 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.948965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949067 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949155 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod 
\"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949283 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d56jc\" (UniqueName: \"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/abb7e15d-7a93-4f87-a926-78eb1ead3680-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.949872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zrmq\" (UniqueName: \"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950200 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950813 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.950851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.951733 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8a489956-9dfa-4e5f-ba64-03e262f9ef85-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.951814 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/abb7e15d-7a93-4f87-a926-78eb1ead3680-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.953883 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.953909 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.954708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.969071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zrmq\" (UniqueName: \"kubernetes.io/projected/abb7e15d-7a93-4f87-a926-78eb1ead3680-kube-api-access-4zrmq\") pod \"kube-state-metrics-777cb5bd5d-s5thh\" (UID: \"abb7e15d-7a93-4f87-a926-78eb1ead3680\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.970002 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d56jc\" (UniqueName: \"kubernetes.io/projected/8a489956-9dfa-4e5f-ba64-03e262f9ef85-kube-api-access-d56jc\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.976885 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.985750 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.987556 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988176 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988344 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.988468 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.990301 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.991215 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:00 crc kubenswrapper[4867]: I0214 04:16:00.991894 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.003361 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.016525 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0aebec-9a44-4db3-9bcb-e63c5f1748c8" path="/var/lib/kubelet/pods/8b0aebec-9a44-4db3-9bcb-e63c5f1748c8/volumes" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.017437 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e06bd216-b4b8-4754-a364-76f41991e155" path="/var/lib/kubelet/pods/e06bd216-b4b8-4754-a364-76f41991e155/volumes" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.046037 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051686 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051837 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051853 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.051997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-sys\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-root\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.052999 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7d066eda-8f33-492d-bf5c-fb6eefed1ced-metrics-client-ca\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.054657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-wtmp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.054749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: 
\"kubernetes.io/empty-dir/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-textfile\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.058641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.062763 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/7d066eda-8f33-492d-bf5c-fb6eefed1ced-node-exporter-tls\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.085851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpjp\" (UniqueName: \"kubernetes.io/projected/7d066eda-8f33-492d-bf5c-fb6eefed1ced-kube-api-access-nvpjp\") pod \"node-exporter-r85dv\" (UID: \"7d066eda-8f33-492d-bf5c-fb6eefed1ced\") " pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153874 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.153921 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.155439 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.155992 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.157051 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.159243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.176604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"controller-manager-76866bf749-9m2w5\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.251320 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.251371 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.337763 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.373762 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-r85dv" Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.391299 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d066eda_8f33_492d_bf5c_fb6eefed1ced.slice/crio-10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a WatchSource:0}: Error finding container 10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a: Status 404 returned error can't find the container with id 10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.515156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh"] Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.522694 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabb7e15d_7a93_4f87_a926_78eb1ead3680.slice/crio-fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e WatchSource:0}: Error finding container fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e: Status 404 returned error can't find the container with id fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.632528 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.643782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8a489956-9dfa-4e5f-ba64-03e262f9ef85-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-4v7sj\" (UID: \"8a489956-9dfa-4e5f-ba64-03e262f9ef85\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.845748 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:01 crc kubenswrapper[4867]: W0214 04:16:01.852337 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8708b876_3ece_4820_b4f1_35d9fb2a195c.slice/crio-d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c WatchSource:0}: Error finding container d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c: Status 404 returned error can't find the container with id d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.866780 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.868591 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.872961 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.873151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.873728 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.880580 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.882445 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.884775 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.885780 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.906318 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967413 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967521 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967626 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967645 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967661 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:01 crc kubenswrapper[4867]: I0214 04:16:01.967837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.066457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"10e6369faf39f04e446fd37c20e679a23aab7d1633ac0e4a0794215fd833d56a"} Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070814 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.070974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071005 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071104 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071159 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.071192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.078767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 
04:16:02.079177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-volume\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.079639 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerStarted","Data":"d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c"} Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.080149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.080889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a5db44-6c30-46cf-a796-64a6e898d1d8-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.082640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.083973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.089678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5a5db44-6c30-46cf-a796-64a6e898d1d8-config-out\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.092969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"fbefbaa847ab8fbcc5371eef8be097c800f79c2b0df7ccaf18efab82b01fe16e"} Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-web-config\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093356 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-tls-assets\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.093963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/c5a5db44-6c30-46cf-a796-64a6e898d1d8-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.096750 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5jp\" (UniqueName: \"kubernetes.io/projected/c5a5db44-6c30-46cf-a796-64a6e898d1d8-kube-api-access-zt5jp\") pod \"alertmanager-main-0\" (UID: \"c5a5db44-6c30-46cf-a796-64a6e898d1d8\") " pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.196159 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.386371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj"] Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.723388 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.784483 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"] Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.793751 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797442 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-7kwwq" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797668 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797788 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.797893 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.798033 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.798149 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.803420 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-17jpo9sluqn12" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.832851 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"] Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892275 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892349 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.892412 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.998838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999259 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999316 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999363 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999399 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:02 crc kubenswrapper[4867]: I0214 04:16:02.999433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.000708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/72801c86-0365-4e93-8887-4fdc6d8a9cad-metrics-client-ca\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.007661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.008129 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.010464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.015873 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: 
\"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.034720 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.040620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bslvv\" (UniqueName: \"kubernetes.io/projected/72801c86-0365-4e93-8887-4fdc6d8a9cad-kube-api-access-bslvv\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.048904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/72801c86-0365-4e93-8887-4fdc6d8a9cad-secret-grpc-tls\") pod \"thanos-querier-85586fc579-b75c7\" (UID: \"72801c86-0365-4e93-8887-4fdc6d8a9cad\") " pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"41226741f1ca63a0314854105ee1ce32c395e601a7879b00a61e3531e13e0e9a"} Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"b76398484dfa69747d3ec86f6c5324e37226daf8848e6352b3135a8d16581f21"} Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.098327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"3af0078730ce8ecd268ea6d91af18ec80365f8d9649e0bb2ac70611110bdd78b"} Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.100763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerStarted","Data":"abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7"} Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.101075 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.103577 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"9f022d0231f135a752d98219eee7840a83e14d9d801b81aee3ea93de570a6a0c"} Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.109350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.135236 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:03 crc kubenswrapper[4867]: I0214 04:16:03.142214 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" podStartSLOduration=6.142199751 podStartE2EDuration="6.142199751s" podCreationTimestamp="2026-02-14 04:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:03.118564399 +0000 UTC m=+395.199501733" watchObservedRunningTime="2026-02-14 04:16:03.142199751 +0000 UTC m=+395.223137055" Feb 14 04:16:04 crc kubenswrapper[4867]: I0214 04:16:04.531209 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-85586fc579-b75c7"] Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.117990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"3cae1f3da5324ad6a7765b39315d91d008076db773acf89cdfe16d10df3238f2"} Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.124391 4867 generic.go:334] "Generic (PLEG): container finished" podID="7d066eda-8f33-492d-bf5c-fb6eefed1ced" containerID="39494fc9f698469501e541fe48f10554b81437f5f3f35bd14d402b6e2cf1c3ca" exitCode=0 Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.125953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerDied","Data":"39494fc9f698469501e541fe48f10554b81437f5f3f35bd14d402b6e2cf1c3ca"} Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.542637 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.544837 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.604821 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666206 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666345 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666393 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666435 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.666473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769083 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc 
kubenswrapper[4867]: I0214 04:16:05.769652 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769766 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769897 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.769924 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.770950 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.771227 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.771836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 
04:16:05.772781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.779529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.779602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.795084 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"console-7fbfc7fbd4-76v9z\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:05 crc kubenswrapper[4867]: I0214 04:16:05.956161 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.114722 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.120793 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124244 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124765 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-mc7cq" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.124955 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125050 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-abg8865f8j0ji" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.125552 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.133169 4867 generic.go:334] "Generic (PLEG): container finished" podID="c5a5db44-6c30-46cf-a796-64a6e898d1d8" containerID="86fc1a6798da12a1789d84257cbccec7dccff2f126dc7b986ccb003e93a9c590" exitCode=0 Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.133231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerDied","Data":"86fc1a6798da12a1789d84257cbccec7dccff2f126dc7b986ccb003e93a9c590"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.138834 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.148872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"9591beb52dab2e4705bcde5b084f7050eaee63d3797fbf0fc8bfa9dbb6b8cd39"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.149281 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"d08ca02c1ff320120218e63dd9fb8d0b5e23c858da1415d59e2a0dedb0001612"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.149294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" event={"ID":"abb7e15d-7a93-4f87-a926-78eb1ead3680","Type":"ContainerStarted","Data":"d34d202e077f56baa0981c4e3634f34875c0d7fbde4e24fa95e94a15f7803c4f"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.151523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" event={"ID":"8a489956-9dfa-4e5f-ba64-03e262f9ef85","Type":"ContainerStarted","Data":"46fabc4fd91cd9c51b46059c87c082a0879831203a97e844e9a887eaceb509d3"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.156489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" 
event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"d37c84664af69c7327ec7303516ef0bbe2962e265fbee20c35b4e4962f3bdb92"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.156548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-r85dv" event={"ID":"7d066eda-8f33-492d-bf5c-fb6eefed1ced","Type":"ContainerStarted","Data":"ac1aaafe0177a2d6a82c473c3bb33148114ea78577bbfe08cb129d9db744fb63"} Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.197147 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-4v7sj" podStartSLOduration=3.97261469 podStartE2EDuration="6.197114771s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:02.954352931 +0000 UTC m=+395.035290245" lastFinishedPulling="2026-02-14 04:16:05.178853012 +0000 UTC m=+397.259790326" observedRunningTime="2026-02-14 04:16:06.187489752 +0000 UTC m=+398.268427066" watchObservedRunningTime="2026-02-14 04:16:06.197114771 +0000 UTC m=+398.278052085" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.236485 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-r85dv" podStartSLOduration=3.509462531 podStartE2EDuration="6.236471899s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:01.393798745 +0000 UTC m=+393.474736059" lastFinishedPulling="2026-02-14 04:16:04.120808093 +0000 UTC m=+396.201745427" observedRunningTime="2026-02-14 04:16:06.235329859 +0000 UTC m=+398.316267173" watchObservedRunningTime="2026-02-14 04:16:06.236471899 +0000 UTC m=+398.317409213" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.239365 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-s5thh" podStartSLOduration=3.623661554 podStartE2EDuration="6.239353763s" podCreationTimestamp="2026-02-14 04:16:00 +0000 UTC" firstStartedPulling="2026-02-14 04:16:01.525534892 +0000 UTC m=+393.606472206" lastFinishedPulling="2026-02-14 04:16:04.141227101 +0000 UTC m=+396.222164415" observedRunningTime="2026-02-14 04:16:06.212874919 +0000 UTC m=+398.293812233" watchObservedRunningTime="2026-02-14 04:16:06.239353763 +0000 UTC m=+398.320291067" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.275475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276729 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276767 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.276829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379628 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod 
\"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379814 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.379867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.380048 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.380677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/652d53d9-a4c0-4061-b817-ca5173785521-audit-log\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.381788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.381907 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/652d53d9-a4c0-4061-b817-ca5173785521-metrics-server-audit-profiles\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.387074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-server-tls\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.387294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-client-ca-bundle\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.389573 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/652d53d9-a4c0-4061-b817-ca5173785521-secret-metrics-client-certs\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.397532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p79\" (UniqueName: \"kubernetes.io/projected/652d53d9-a4c0-4061-b817-ca5173785521-kube-api-access-d6p79\") pod \"metrics-server-76ddc659b-tzdtd\" (UID: \"652d53d9-a4c0-4061-b817-ca5173785521\") " pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.414837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:16:06 crc kubenswrapper[4867]: W0214 04:16:06.421201 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77d496b_c6fc_478c_9bf7_7ea59cb3a474.slice/crio-27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7 WatchSource:0}: Error finding container 27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7: Status 404 returned error can't find the container with id 27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7 Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.448098 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.491160 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.492907 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.496239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.499302 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.499367 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.592903 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.694455 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.701123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/bcf2722f-8c1f-4061-8c4a-9888961c5361-monitoring-plugin-cert\") pod \"monitoring-plugin-7f5858d95d-fvlxd\" (UID: \"bcf2722f-8c1f-4061-8c4a-9888961c5361\") " pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.829526 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:06 crc kubenswrapper[4867]: I0214 04:16:06.895181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-76ddc659b-tzdtd"] Feb 14 04:16:06 crc kubenswrapper[4867]: W0214 04:16:06.955441 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652d53d9_a4c0_4061_b817_ca5173785521.slice/crio-8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292 WatchSource:0}: Error finding container 8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292: Status 404 returned error can't find the container with id 8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292 Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.117327 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.119359 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129833 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129874 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.129993 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130091 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.130485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131811 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.131949 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-dmlbq" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132165 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132286 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-8oiq8eud6lg7c" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.132446 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.140652 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.141039 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206473 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerStarted","Data":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerStarted","Data":"27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206925 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206961 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.206982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207019 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207246 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207263 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.207284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.208877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"8380ec1c893b73a66d9d682954baa50258140ac65258e730cb625793017a2292"} Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.223857 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fbfc7fbd4-76v9z" podStartSLOduration=2.223836339 podStartE2EDuration="2.223836339s" podCreationTimestamp="2026-02-14 04:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:07.221870788 +0000 UTC m=+399.302808102" watchObservedRunningTime="2026-02-14 04:16:07.223836339 +0000 UTC m=+399.304773643" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.263986 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd"] Feb 14 04:16:07 crc kubenswrapper[4867]: W0214 04:16:07.272090 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf2722f_8c1f_4061_8c4a_9888961c5361.slice/crio-5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674 WatchSource:0}: Error finding container 5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674: Status 404 returned error can't find the container with id 5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674 Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308350 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308526 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.308546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309325 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309378 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309409 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309458 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.309499 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.311093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.311201 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.311987 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.313352 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317342 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62ee3130-2952-453e-82b6-dba068ba1bc9-config-out\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.317651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.318154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.320332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.320952 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-web-config\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.321725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.322453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.325843 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/62ee3130-2952-453e-82b6-dba068ba1bc9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.328048 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.332263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.332781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.336217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwx2t\" (UniqueName: \"kubernetes.io/projected/62ee3130-2952-453e-82b6-dba068ba1bc9-kube-api-access-vwx2t\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.340389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/62ee3130-2952-453e-82b6-dba068ba1bc9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"62ee3130-2952-453e-82b6-dba068ba1bc9\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:07 crc kubenswrapper[4867]: I0214 04:16:07.462543 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.057853 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" containerID="cri-o://984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" gracePeriod=30 Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.216539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" event={"ID":"bcf2722f-8c1f-4061-8c4a-9888961c5361","Type":"ContainerStarted","Data":"5d31c340527198d5e84bc28ee692f930625e450c1c7b56ed8d327fcc0a767674"} Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.219196 4867 generic.go:334] "Generic (PLEG): container finished" podID="c029599e-5014-4874-917f-076635849451" containerID="984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" exitCode=0 Feb 14 04:16:08 crc kubenswrapper[4867]: I0214 04:16:08.219250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerDied","Data":"984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517"} Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.594679 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762622 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762673 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.762769 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") pod \"c029599e-5014-4874-917f-076635849451\" (UID: \"c029599e-5014-4874-917f-076635849451\") " Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.763979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.764093 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771587 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.771806 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6" (OuterVolumeSpecName: "kube-api-access-bmbh6") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "kube-api-access-bmbh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.783195 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.783360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.784687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c029599e-5014-4874-917f-076635849451" (UID: "c029599e-5014-4874-917f-076635849451"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864544 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864576 4867 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864587 4867 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c029599e-5014-4874-917f-076635849451-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864596 4867 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c029599e-5014-4874-917f-076635849451-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864605 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmbh6\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-kube-api-access-bmbh6\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864612 4867 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c029599e-5014-4874-917f-076635849451-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.864620 4867 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c029599e-5014-4874-917f-076635849451-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:09 crc kubenswrapper[4867]: I0214 04:16:09.954914 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" event={"ID":"c029599e-5014-4874-917f-076635849451","Type":"ContainerDied","Data":"6ea0765f93238181496aa9ad98328dd359db53721f5f5fd14d5d2d61c6d3b39b"} Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248119 4867 scope.go:117] "RemoveContainer" containerID="984105ff3eb0991dfe28181ee193825f9011bc66c156c9de4b38deec4acb2517" Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.248285 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.293304 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:16:10 crc kubenswrapper[4867]: I0214 04:16:10.297301 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-5rxcg"] Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.005014 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c029599e-5014-4874-917f-076635849451" path="/var/lib/kubelet/pods/c029599e-5014-4874-917f-076635849451/volumes" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"2feaa7ec3b997344380510cbb416c62fadf0bc72aa0c4b6730f60e6d52015870"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"d2159771377fc702371462aa9a14ef614a4f97f6537f88e8acad4e91910fe740"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.257353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"a4944956fbbc325cfc0cd1268c251f77eced79caeff535d2b1c8b141aeb39bc0"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.260270 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" event={"ID":"bcf2722f-8c1f-4061-8c4a-9888961c5361","Type":"ContainerStarted","Data":"9111a116940ebcb2258feb531f677548eeb63b1e51787d91375ec3b3726af5fa"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.260711 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.265995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"0571d9124b51b4ce87998f4b34d8cd3fdfc350358086d53c9ee26294983f688e"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.266033 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"1ab0d330fb12bc5326d725bac308511aa1fb2faae489b227d65b8cf1e3aa52d5"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.266044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"566622c854e9cda9094c9653505e2c60d9642f51a39cac4072cb7722d74d89a4"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.267799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.270007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" 
event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273475 4867 generic.go:334] "Generic (PLEG): container finished" podID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerID="6676250e6ab4328a00c955c252f7334c62f0069abe3d9ce15319bd01bbf22dd8" exitCode=0 Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273539 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerDied","Data":"6676250e6ab4328a00c955c252f7334c62f0069abe3d9ce15319bd01bbf22dd8"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.273588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"63a220aecf6d9618dc2a4c714dd800d1ef55f79fcac6e58e5693dec9210c1604"} Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.290402 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podStartSLOduration=1.8926475379999999 podStartE2EDuration="5.290347025s" podCreationTimestamp="2026-02-14 04:16:06 +0000 UTC" firstStartedPulling="2026-02-14 04:16:07.27449811 +0000 UTC m=+399.355435424" lastFinishedPulling="2026-02-14 04:16:10.672197597 +0000 UTC m=+402.753134911" observedRunningTime="2026-02-14 04:16:11.282064241 +0000 UTC m=+403.363001555" watchObservedRunningTime="2026-02-14 04:16:11.290347025 +0000 UTC m=+403.371284369" Feb 14 04:16:11 crc kubenswrapper[4867]: I0214 04:16:11.377554 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podStartSLOduration=1.672964886 podStartE2EDuration="5.377533831s" podCreationTimestamp="2026-02-14 04:16:06 +0000 UTC" firstStartedPulling="2026-02-14 04:16:06.958180777 +0000 UTC m=+399.039118091" lastFinishedPulling="2026-02-14 04:16:10.662749722 +0000 UTC m=+402.743687036" observedRunningTime="2026-02-14 04:16:11.376255818 +0000 UTC m=+403.457193142" watchObservedRunningTime="2026-02-14 04:16:11.377533831 +0000 UTC m=+403.458471145" Feb 14 04:16:12 crc kubenswrapper[4867]: I0214 04:16:12.282533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"639d21805a94f75183a7db3daa9f3bc373f7cdf3d67020113d1e034c2cf56388"} Feb 14 04:16:12 crc kubenswrapper[4867]: I0214 04:16:12.283286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"f01f775fa1b8582cc6203a580eb810f3bc4133698ba910f8a34d5dace4711a59"} Feb 14 04:16:14 crc kubenswrapper[4867]: I0214 04:16:14.579744 4867 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-5rxcg container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 04:16:14 crc kubenswrapper[4867]: I0214 04:16:14.580456 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-5rxcg" 
podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.15:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.307779 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"24413e9ce4db97b9a01e5d1bc087f8b72ce77a2da91cb1efbb9dd2aae6bf3986"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.312099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"c5a5db44-6c30-46cf-a796-64a6e898d1d8","Type":"ContainerStarted","Data":"d00b368d905164f5120c48870c7bc64d59c4964cb3f7346655f07db23a4047bd"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.314276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"147c2f8c163d08e9696f10f3abfcd588dd0513b30c08f7e00c51a9d7851cd103"} Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.342753 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=2.037592155 podStartE2EDuration="14.342711387s" podCreationTimestamp="2026-02-14 04:16:01 +0000 UTC" firstStartedPulling="2026-02-14 04:16:02.784256172 +0000 UTC m=+394.865193476" lastFinishedPulling="2026-02-14 04:16:15.089375394 +0000 UTC m=+407.170312708" observedRunningTime="2026-02-14 04:16:15.33898931 +0000 UTC m=+407.419926624" watchObservedRunningTime="2026-02-14 04:16:15.342711387 +0000 UTC m=+407.423648701" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.957103 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.957162 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:15 crc kubenswrapper[4867]: I0214 04:16:15.963649 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.322545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"5d3b9e6890a6983a76b8aaf4fbb189d3b95cb8b346e7654b16efa15fc1727158"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323004 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"c3d8f2697ea91aea780e16dab27e369fe312387513058af3a300f091529a0d05"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"ce3cf96117dabb896387465fc0c257d4c04299c9b495615f2560e1637d1ca81f"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"dc7e54770405cf89b69354f3e30c9c2865b6bd4f85f209126b67b26b417b646c"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.323043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"62ee3130-2952-453e-82b6-dba068ba1bc9","Type":"ContainerStarted","Data":"599b4a74cea66c2d77401338e138b4d9b1a9f005f8b2c3f1104caf320d0c7126"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.327411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"49d5c225eb2af6354612f9d06ed06b8e4f4d89b994c5f92f22d0ac4184aa978f"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.327440 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" event={"ID":"72801c86-0365-4e93-8887-4fdc6d8a9cad","Type":"ContainerStarted","Data":"02dba26e3be94b0469342c8cd74b724969629756f0f4acd78582351c566c3abd"} Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.330888 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.353585 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.534510957 podStartE2EDuration="9.353562954s" podCreationTimestamp="2026-02-14 04:16:07 +0000 UTC" firstStartedPulling="2026-02-14 04:16:11.275930972 +0000 UTC m=+403.356868296" lastFinishedPulling="2026-02-14 04:16:15.094982979 +0000 UTC m=+407.175920293" observedRunningTime="2026-02-14 04:16:16.349281914 +0000 UTC m=+408.430219238" watchObservedRunningTime="2026-02-14 04:16:16.353562954 +0000 UTC m=+408.434500278" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.398483 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podStartSLOduration=4.43365389 podStartE2EDuration="14.398457536s" podCreationTimestamp="2026-02-14 04:16:02 +0000 UTC" firstStartedPulling="2026-02-14 04:16:05.118726356 +0000 UTC m=+397.199663680" lastFinishedPulling="2026-02-14 04:16:15.083530012 +0000 UTC m=+407.164467326" observedRunningTime="2026-02-14 04:16:16.392971854 +0000 UTC m=+408.473909168" watchObservedRunningTime="2026-02-14 04:16:16.398457536 +0000 UTC m=+408.479394850" Feb 14 04:16:16 crc kubenswrapper[4867]: I0214 04:16:16.450620 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.004561 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.004761 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" containerID="cri-o://abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" gracePeriod=30 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.017592 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:17 crc 
kubenswrapper[4867]: I0214 04:16:17.017793 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" containerID="cri-o://6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" gracePeriod=30 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.335594 4867 generic.go:334] "Generic (PLEG): container finished" podID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerID="6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" exitCode=0 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.335712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerDied","Data":"6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6"} Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.337898 4867 generic.go:334] "Generic (PLEG): container finished" podID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerID="abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" exitCode=0 Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.338073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerDied","Data":"abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7"} Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.339447 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.352027 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.464393 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.654102 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.663823 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816694 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") pod \"96b49908-c23d-45d6-b7fa-3d718d01ee00\" (UID: \"96b49908-c23d-45d6-b7fa-3d718d01ee00\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816872 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.816909 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") pod \"8708b876-3ece-4820-b4f1-35d9fb2a195c\" (UID: \"8708b876-3ece-4820-b4f1-35d9fb2a195c\") " Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817644 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" 
(UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config" (OuterVolumeSpecName: "config") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.817902 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca" (OuterVolumeSpecName: "client-ca") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.818261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config" (OuterVolumeSpecName: "config") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.818497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca" (OuterVolumeSpecName: "client-ca") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg" (OuterVolumeSpecName: "kube-api-access-rgmdg") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "kube-api-access-rgmdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96b49908-c23d-45d6-b7fa-3d718d01ee00" (UID: "96b49908-c23d-45d6-b7fa-3d718d01ee00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.822990 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.830639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z" (OuterVolumeSpecName: "kube-api-access-shl4z") pod "8708b876-3ece-4820-b4f1-35d9fb2a195c" (UID: "8708b876-3ece-4820-b4f1-35d9fb2a195c"). 
InnerVolumeSpecName "kube-api-access-shl4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919161 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shl4z\" (UniqueName: \"kubernetes.io/projected/8708b876-3ece-4820-b4f1-35d9fb2a195c-kube-api-access-shl4z\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919220 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919235 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919246 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgmdg\" (UniqueName: \"kubernetes.io/projected/96b49908-c23d-45d6-b7fa-3d718d01ee00-kube-api-access-rgmdg\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919260 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b49908-c23d-45d6-b7fa-3d718d01ee00-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919273 4867 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919282 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b49908-c23d-45d6-b7fa-3d718d01ee00-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919291 4867 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8708b876-3ece-4820-b4f1-35d9fb2a195c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:17 crc kubenswrapper[4867]: I0214 04:16:17.919300 4867 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8708b876-3ece-4820-b4f1-35d9fb2a195c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" event={"ID":"96b49908-c23d-45d6-b7fa-3d718d01ee00","Type":"ContainerDied","Data":"36ca2d37b0192cdee33dc6fe36ba136f75d321a0564771f7e8b3c2c82c2a9e3c"} Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352814 4867 scope.go:117] "RemoveContainer" containerID="6b1dcdc8ab4882eb0ae66f99651a492e0075228f8a659714df05c3f830d62ae6" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.352828 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.356231 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" event={"ID":"8708b876-3ece-4820-b4f1-35d9fb2a195c","Type":"ContainerDied","Data":"d45331f7f516f685e06d725fb6651d41df87d69b6bbe0b5ca1d4db8536a8773c"} Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.356301 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-76866bf749-9m2w5" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.372797 4867 scope.go:117] "RemoveContainer" containerID="abaa323618b879bb61fc24afaa3f869dc0bc36bdaf9414230f2b473467c245b7" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.399720 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.406081 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-76866bf749-9m2w5"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.410676 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.413736 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-658bcc664-kwbrz"] Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988123 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988372 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988387 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988405 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988411 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: E0214 04:16:18.988425 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988431 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988557 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" containerName="controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988570 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" containerName="route-controller-manager" Feb 14 04:16:18 crc kubenswrapper[4867]: I0214 04:16:18.988578 4867 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c029599e-5014-4874-917f-076635849451" containerName="registry" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.988987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.992305 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.992921 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993141 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993298 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993353 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993698 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.993990 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.994317 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:18.994849 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.003815 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.004800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005103 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005392 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.005455 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.017236 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.023309 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8708b876-3ece-4820-b4f1-35d9fb2a195c" path="/var/lib/kubelet/pods/8708b876-3ece-4820-b4f1-35d9fb2a195c/volumes" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.023905 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96b49908-c23d-45d6-b7fa-3d718d01ee00" path="/var/lib/kubelet/pods/96b49908-c23d-45d6-b7fa-3d718d01ee00/volumes" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.024735 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.027678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138194 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138435 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: 
\"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.138573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.240911 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241028 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241202 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241256 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" 
Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.241545 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.243222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-config\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.246371 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-client-ca\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.247712 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-config\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.248021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/29172228-9eb8-461f-8f75-cdd021e0d30c-client-ca\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.250415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9fc9dc1-437a-4160-b805-fabfd7f877c2-proxy-ca-bundles\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.257758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29172228-9eb8-461f-8f75-cdd021e0d30c-serving-cert\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.261370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8q4n\" (UniqueName: \"kubernetes.io/projected/29172228-9eb8-461f-8f75-cdd021e0d30c-kube-api-access-k8q4n\") pod \"route-controller-manager-7575f7b945-9zbh8\" (UID: \"29172228-9eb8-461f-8f75-cdd021e0d30c\") " pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.261440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9fc9dc1-437a-4160-b805-fabfd7f877c2-serving-cert\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.264375 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc495\" (UniqueName: \"kubernetes.io/projected/a9fc9dc1-437a-4160-b805-fabfd7f877c2-kube-api-access-cc495\") pod \"controller-manager-574c444545-stzjc\" (UID: \"a9fc9dc1-437a-4160-b805-fabfd7f877c2\") " pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.339164 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.341939 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.752375 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8"] Feb 14 04:16:19 crc kubenswrapper[4867]: W0214 04:16:19.761816 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29172228_9eb8_461f_8f75_cdd021e0d30c.slice/crio-0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112 WatchSource:0}: Error finding container 0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112: Status 404 returned error can't find the container with id 0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112 Feb 14 04:16:19 crc kubenswrapper[4867]: I0214 04:16:19.933308 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-574c444545-stzjc"] Feb 14 04:16:19 crc kubenswrapper[4867]: W0214 04:16:19.934361 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9fc9dc1_437a_4160_b805_fabfd7f877c2.slice/crio-05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f WatchSource:0}: Error finding container 05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f: Status 404 returned error can't find the container with id 05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.382566 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.382997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.383011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"0bf4f8b0d07802e8b94543db21cd34c5b22cce6586e64afbd6c096ec6e7aa112"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.385840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.385902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"05d30f129c010d9463418ba8920f196b29c46fb2a634f4475dfe9b2bf1a97a8f"} Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.386077 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.389424 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.392347 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.400871 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podStartSLOduration=3.4008482239999998 podStartE2EDuration="3.400848224s" podCreationTimestamp="2026-02-14 04:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:20.400646199 +0000 UTC m=+412.481583503" watchObservedRunningTime="2026-02-14 04:16:20.400848224 +0000 UTC m=+412.481785538" Feb 14 04:16:20 crc kubenswrapper[4867]: I0214 04:16:20.430586 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podStartSLOduration=3.430567243 podStartE2EDuration="3.430567243s" podCreationTimestamp="2026-02-14 04:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:16:20.424556278 +0000 UTC m=+412.505493592" watchObservedRunningTime="2026-02-14 04:16:20.430567243 +0000 UTC m=+412.511504557" Feb 14 04:16:26 crc kubenswrapper[4867]: I0214 04:16:26.449144 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:26 crc kubenswrapper[4867]: I0214 04:16:26.449715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.251473 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252172 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252265 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.252937 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.253002 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" 
containerName="machine-config-daemon" containerID="cri-o://a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" gracePeriod=600 Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461379 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" exitCode=0 Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66"} Feb 14 04:16:31 crc kubenswrapper[4867]: I0214 04:16:31.461495 4867 scope.go:117] "RemoveContainer" containerID="c06b088007e4cc02eff5f33dffc101f9d559fc0af6d9fc99cb7d1a49c47deec3" Feb 14 04:16:32 crc kubenswrapper[4867]: I0214 04:16:32.471831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} Feb 14 04:16:41 crc kubenswrapper[4867]: I0214 04:16:41.535195 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c4c52" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" containerID="cri-o://63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" gracePeriod=15 Feb 14 04:16:41 crc kubenswrapper[4867]: E0214 04:16:41.653098 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb63883f_65f5_4107_877a_ff786d6c00f9.slice/crio-conmon-63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.030038 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c4c52_bb63883f-65f5-4107-877a-ff786d6c00f9/console/0.log" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.030107 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.110563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111606 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111752 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.111850 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") pod \"bb63883f-65f5-4107-877a-ff786d6c00f9\" (UID: \"bb63883f-65f5-4107-877a-ff786d6c00f9\") " Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112220 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112695 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config" (OuterVolumeSpecName: "console-config") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.112759 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113535 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113570 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113585 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.113595 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb63883f-65f5-4107-877a-ff786d6c00f9-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.117732 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t" (OuterVolumeSpecName: "kube-api-access-zvv7t") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "kube-api-access-zvv7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.118260 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.118878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb63883f-65f5-4107-877a-ff786d6c00f9" (UID: "bb63883f-65f5-4107-877a-ff786d6c00f9"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.214997 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.215062 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb63883f-65f5-4107-877a-ff786d6c00f9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.215088 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvv7t\" (UniqueName: \"kubernetes.io/projected/bb63883f-65f5-4107-877a-ff786d6c00f9-kube-api-access-zvv7t\") on node \"crc\" DevicePath \"\"" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.549896 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c4c52_bb63883f-65f5-4107-877a-ff786d6c00f9/console/0.log" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550285 4867 generic.go:334] "Generic (PLEG): container finished" podID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" exitCode=2 Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerDied","Data":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"} Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550373 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c4c52" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550401 4867 scope.go:117] "RemoveContainer" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.550385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c4c52" event={"ID":"bb63883f-65f5-4107-877a-ff786d6c00f9","Type":"ContainerDied","Data":"0bfaa5034c5f4aa419ca6cadf9c2423257fac17593840dedc0a8810563cfdfe4"} Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.584364 4867 scope.go:117] "RemoveContainer" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: E0214 04:16:42.584818 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": container with ID starting with 63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b not found: ID does not exist" containerID="63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.584859 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b"} err="failed to get container status \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": rpc error: code = NotFound desc = could not find container \"63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b\": container with ID starting with 63e5a177904c856ac44a70adc1fabc18b6435a4f03e0f904c50917bec344fb2b not found: ID does not exist" Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.589353 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:42 crc kubenswrapper[4867]: I0214 04:16:42.593687 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c4c52"] Feb 14 04:16:43 crc kubenswrapper[4867]: I0214 04:16:43.011502 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" path="/var/lib/kubelet/pods/bb63883f-65f5-4107-877a-ff786d6c00f9/volumes" Feb 14 04:16:46 crc kubenswrapper[4867]: I0214 04:16:46.460036 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:16:46 crc kubenswrapper[4867]: I0214 04:16:46.468064 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.464708 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.516327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:07 crc kubenswrapper[4867]: I0214 04:17:07.745466 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.567058 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:51 crc kubenswrapper[4867]: E0214 04:17:51.568654 
4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.568681 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.568959 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb63883f-65f5-4107-877a-ff786d6c00f9" containerName="console" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.569947 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.580944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.643890 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644394 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644456 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.644563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746896 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.746972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747027 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.747072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.748412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.748815 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.749767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.750989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.756111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.756705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.770340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"console-6687988ff8-hggh9\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:51 crc kubenswrapper[4867]: I0214 04:17:51.890720 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:17:52 crc kubenswrapper[4867]: I0214 04:17:52.390332 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.020966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerStarted","Data":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.021537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerStarted","Data":"129cdcd69132d20dcbb1f824da4d34637e927a59f414ddd5999cdc93d09a0538"} Feb 14 04:17:53 crc kubenswrapper[4867]: I0214 04:17:53.045293 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6687988ff8-hggh9" podStartSLOduration=2.045260152 podStartE2EDuration="2.045260152s" podCreationTimestamp="2026-02-14 04:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:17:53.038679661 +0000 UTC m=+505.119616995" watchObservedRunningTime="2026-02-14 04:17:53.045260152 +0000 UTC m=+505.126197466" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.892034 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.893302 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:01 crc kubenswrapper[4867]: I0214 04:18:01.896294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:02 crc kubenswrapper[4867]: I0214 04:18:02.090650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:18:02 crc kubenswrapper[4867]: I0214 04:18:02.198003 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.247334 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7fbfc7fbd4-76v9z" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" containerID="cri-o://df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" gracePeriod=15 Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.628757 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbfc7fbd4-76v9z_f77d496b-c6fc-478c-9bf7-7ea59cb3a474/console/0.log" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.629239 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810643 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810877 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810950 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.810973 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") pod \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\" (UID: \"f77d496b-c6fc-478c-9bf7-7ea59cb3a474\") " Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811601 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811611 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca" (OuterVolumeSpecName: "service-ca") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811877 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.811956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config" (OuterVolumeSpecName: "console-config") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.817453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch" (OuterVolumeSpecName: "kube-api-access-wsfch") pod "f77d496b-c6fc-478c-9bf7-7ea59cb3a474" (UID: "f77d496b-c6fc-478c-9bf7-7ea59cb3a474"). InnerVolumeSpecName "kube-api-access-wsfch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912787 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912827 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912838 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfch\" (UniqueName: \"kubernetes.io/projected/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-kube-api-access-wsfch\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912851 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912860 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912869 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:27 crc kubenswrapper[4867]: I0214 04:18:27.912879 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f77d496b-c6fc-478c-9bf7-7ea59cb3a474-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334110 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fbfc7fbd4-76v9z_f77d496b-c6fc-478c-9bf7-7ea59cb3a474/console/0.log" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334772 4867 generic.go:334] "Generic (PLEG): container finished" podID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" exitCode=2 Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334849 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fbfc7fbd4-76v9z" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.334846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerDied","Data":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.335067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fbfc7fbd4-76v9z" event={"ID":"f77d496b-c6fc-478c-9bf7-7ea59cb3a474","Type":"ContainerDied","Data":"27f66f9acfe9eb8d98daf1aedc7604a2c13203017a16447c28475c04bbfd3cf7"} Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.335116 4867 scope.go:117] "RemoveContainer" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.362339 4867 scope.go:117] "RemoveContainer" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: E0214 04:18:28.363075 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": container with ID starting with df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd not found: ID does not exist" containerID="df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.363153 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd"} err="failed to get container status \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": rpc error: code = NotFound desc = could not find container \"df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd\": container with ID starting with df535b9b85af4492848df019310db3541d82923651d0d5f2862f2b53665e91fd not found: ID does not exist" Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.386318 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:28 crc kubenswrapper[4867]: I0214 04:18:28.395728 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7fbfc7fbd4-76v9z"] Feb 14 04:18:29 crc kubenswrapper[4867]: I0214 04:18:29.012292 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" path="/var/lib/kubelet/pods/f77d496b-c6fc-478c-9bf7-7ea59cb3a474/volumes" Feb 14 04:18:31 crc kubenswrapper[4867]: I0214 04:18:31.251575 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:18:31 crc kubenswrapper[4867]: I0214 04:18:31.252013 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:19:01 crc kubenswrapper[4867]: I0214 04:19:01.250659 4867 patch_prober.go:28] interesting 
pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:19:01 crc kubenswrapper[4867]: I0214 04:19:01.251429 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.779653 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.780420 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.780496 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.781211 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:19:32 crc kubenswrapper[4867]: I0214 04:19:32.781281 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" gracePeriod=600 Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.801269 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" exitCode=0 Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.801364 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645"} Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.802623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} Feb 14 04:19:33 crc kubenswrapper[4867]: I0214 04:19:33.802669 4867 scope.go:117] "RemoveContainer" containerID="a1533900ce1e5bb0e6f304c6961b52011041a6df37ce715de5540edb7f995f66" Feb 14 04:19:37 
crc kubenswrapper[4867]: I0214 04:19:37.750892 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"] Feb 14 04:19:37 crc kubenswrapper[4867]: E0214 04:19:37.752251 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.752272 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.752449 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77d496b-c6fc-478c-9bf7-7ea59cb3a474" containerName="console" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.753740 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.755747 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.763216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"] Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904799 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:37 crc kubenswrapper[4867]: I0214 04:19:37.904931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006161 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006226 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: 
\"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.006749 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.026253 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.080587 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.546098 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc"] Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.850084 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerStarted","Data":"af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2"} Feb 14 04:19:38 crc kubenswrapper[4867]: I0214 04:19:38.850928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerStarted","Data":"6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5"} Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.865896 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2" exitCode=0 Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.865984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"af6533f1682e3e0b3d048ad1f8c7ab5aacdb579600593234a994eb4d881560e2"} Feb 14 04:19:39 crc kubenswrapper[4867]: I0214 04:19:39.869282 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:19:41 crc kubenswrapper[4867]: I0214 04:19:41.884731 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="99ce3c3b81d9334b837bda835fc6970e3b0d6e93be7564016ddab6611c14d7dc" exitCode=0 Feb 14 04:19:41 crc kubenswrapper[4867]: I0214 04:19:41.885323 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"99ce3c3b81d9334b837bda835fc6970e3b0d6e93be7564016ddab6611c14d7dc"} Feb 14 04:19:42 crc kubenswrapper[4867]: I0214 04:19:42.897139 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerID="628903026be532a3bac7ed17fd2ccb0174f67522fb4dc5532429553a1a26adf4" exitCode=0 Feb 14 04:19:42 crc kubenswrapper[4867]: I0214 04:19:42.897335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"628903026be532a3bac7ed17fd2ccb0174f67522fb4dc5532429553a1a26adf4"} Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.214531 4867 util.go:48] "No ready sandbox for pod can be found. 
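The "Generic (PLEG): container finished" lines come from the kubelet's Pod Lifecycle Event Generator, which periodically relists container runtime state and turns deltas into ContainerStarted/ContainerDied events. Here the marketplace pod's containers start and exit with code 0 one after another, consistent with an OLM bundle-extraction job (the pull/extract/util container names appear further down). A sketch of the relist diff, with deliberately simplified states:

// pleg.go - sketch of the relist-and-diff pattern behind the PLEG events
// above (assumption: two states only; the real PLEG tracks richer status).
package main

import "fmt"

type snapshot map[string]string // container ID -> "running" or "exited"

func relist(prev, cur snapshot) {
	for id, s := range cur {
		switch {
		case prev[id] == "" && s == "running":
			fmt.Println("event ContainerStarted", id)
		case prev[id] == "running" && s == "exited":
			fmt.Println("event ContainerDied", id) // exit code is read separately
		}
	}
}

func main() {
	// Truncated container ID from the log, used purely as a label.
	relist(snapshot{}, snapshot{"af6533f1": "running"})
	relist(snapshot{"af6533f1": "running"}, snapshot{"af6533f1": "exited"})
}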
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236605 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.236684 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") pod \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\" (UID: \"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4\") " Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.241919 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle" (OuterVolumeSpecName: "bundle") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.243797 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.259119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj" (OuterVolumeSpecName: "kube-api-access-hj7hj") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "kube-api-access-hj7hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.345557 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj7hj\" (UniqueName: \"kubernetes.io/projected/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-kube-api-access-hj7hj\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.535987 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util" (OuterVolumeSpecName: "util") pod "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" (UID: "2d5a082b-f5f1-4a9d-be2a-31df6953a4a4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.549351 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d5a082b-f5f1-4a9d-be2a-31df6953a4a4-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920254 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" event={"ID":"2d5a082b-f5f1-4a9d-be2a-31df6953a4a4","Type":"ContainerDied","Data":"6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5"} Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920310 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f56f46b17695aa14bb1ca7f77fe9bea2339ea43a76ca69379bdab2ff52084f5" Feb 14 04:19:44 crc kubenswrapper[4867]: I0214 04:19:44.920403 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.759322 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761116 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller" containerID="cri-o://e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761243 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761292 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node" containerID="cri-o://92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761372 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd" containerID="cri-o://d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761237 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb" containerID="cri-o://ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761329 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb" containerID="cri-o://b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.761364 4867 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging" containerID="cri-o://669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.793678 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" containerID="cri-o://e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" gracePeriod=30 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.950902 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.958992 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.960731 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964004 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" exitCode=143 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964110 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" exitCode=143 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964100 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"} Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.964316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"} Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.970764 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971670 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/1.log" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971744 4867 generic.go:334] "Generic (PLEG): container finished" podID="fb77d03e-6ead-48b5-a96a-db4cbd540192" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" exitCode=2 Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.971792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerDied","Data":"b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72"} Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 
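The burst of "Killing container with a grace period" entries for ovnkube-node-6nndn, and the exit codes that follow, fit the standard TERM-then-KILL pattern: the runtime delivers SIGTERM, waits up to the grace period (30s here, 600s for the machine-config daemon earlier), and escalates to SIGKILL. Exit code 143 is 128+15, i.e. the process exited on SIGTERM. A sketch of that escalation against an ordinary process; the real path goes through the CRI to CRI-O, not os/exec.

// gracekill.go - TERM-then-KILL escalation (assumption: a plain Unix
// process as a stand-in for a container).
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func killWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM) // exiting on this yields 128+15 = 143
	select {
	case <-done:
		fmt.Println("exited within the grace period")
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL; would surface as exit code 137
		<-done
		fmt.Println("killed after the grace period elapsed")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	killWithGrace(cmd, 2*time.Second)
}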
04:19:48.971848 4867 scope.go:117] "RemoveContainer" containerID="2556cf2433d1b1241d711139b8c66aabe3f12046f37c0f19b972b8306ff7917b" Feb 14 04:19:48 crc kubenswrapper[4867]: I0214 04:19:48.973325 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" Feb 14 04:19:48 crc kubenswrapper[4867]: E0214 04:19:48.973839 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192" Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.983967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovnkube-controller/3.log" Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.987410 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log" Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.987972 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log" Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988550 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" exitCode=0 Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988587 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" exitCode=0 Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988599 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" exitCode=0 Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988609 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" exitCode=0 Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988618 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" exitCode=0 Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"} Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988683 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"} Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" 
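The kube-multus CrashLoopBackOff message ("back-off 20s restarting failed container") reflects the kubelet's restart backoff, which by the commonly documented defaults starts at 10s, doubles per consecutive failed restart, and caps at 5m; 20s is consistent with a second consecutive failure. A sketch of that schedule follows; the constants are assumed defaults, not values read from this node's configuration.

// crashloop.go - the commonly documented CrashLoopBackOff delay schedule
// (assumption: 10s initial, doubled per failure, capped at 5m).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("failed restart %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}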
event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"} Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988714 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"} Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"} Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.988767 4867 scope.go:117] "RemoveContainer" containerID="97e1fa8b3d99d969cac9ac1d4bdd1161353186d3cf50512e692adeee0f21778a" Feb 14 04:19:49 crc kubenswrapper[4867]: I0214 04:19:49.991448 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.087887 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.088559 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.089341 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222276 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c58t7"] Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222783 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222811 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222825 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222835 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222846 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222853 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222891 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222899 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" 
containerName="ovn-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222914 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kubecfg-setup" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222922 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kubecfg-setup" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222936 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222944 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222960 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222969 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.222984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.222991 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223002 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223011 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223019 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223026 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223034 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="util" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223042 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="util" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223054 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="pull" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223061 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="pull" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223083 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223091 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd" Feb 14 04:19:50 
crc kubenswrapper[4867]: E0214 04:19:50.223101 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223109 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223262 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-ovn-metrics" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223280 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="nbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223295 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223305 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223316 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="sbdb" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223326 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5a082b-f5f1-4a9d-be2a-31df6953a4a4" containerName="extract" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223335 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="kube-rbac-proxy-node" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223345 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223356 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223369 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="northd" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223396 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223406 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovn-acl-logging" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223548 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223558 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: E0214 04:19:50.223576 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223584 4867 state_mem.go:107] "Deleted CPUSet assignment" 
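The long run of RemoveStaleState and "Deleted CPUSet assignment" entries is admission-time housekeeping: when the replacement pod ovnkube-node-c58t7 is added, the CPU and memory managers drop per-container resource assignments that still reference pods which no longer exist (the deleted ovnkube-node-6nndn and the finished bundle job). A sketch of that cleanup, with invented types standing in for the managers' state:

// stalestate.go - sketch of RemoveStaleState housekeeping (assumption:
// invented types; real managers keep CPU sets and memory reservations).
package main

import "fmt"

type key struct{ podUID, container string }

func removeStale(assignments map[key][]int, livePods map[string]bool) {
	for k := range assignments {
		if !livePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n", k.container, k.podUID)
			delete(assignments, k) // the "Deleted CPUSet assignment" step
		}
	}
}

func main() {
	// Truncated pod UIDs from the log, used purely as labels.
	assignments := map[key][]int{
		{"34391a30", "ovnkube-controller"}: {0, 1}, // deleted ovnkube-node pod
		{"2d5a082b", "extract"}:            {2},    // finished bundle job
	}
	removeStale(assignments, map[string]bool{"6b78a78d": true}) // only the new pod lives
	fmt.Println(len(assignments), "assignments remain")
}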
podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.223702 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" containerName="ovnkube-controller" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.225797 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242416 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242454 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242538 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242570 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242593 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash" (OuterVolumeSpecName: "host-slash") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242714 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.242746 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243022 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243212 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243391 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243412 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243540 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243554 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket" (OuterVolumeSpecName: "log-socket") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"34391a30-5865-46e9-af5f-705cc3b11fba\" (UID: \"34391a30-5865-46e9-af5f-705cc3b11fba\") " Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243597 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243822 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243855 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243877 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243887 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243936 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.243999 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244028 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log" (OuterVolumeSpecName: "node-log") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244085 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244109 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244135 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244311 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244332 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244479 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244523 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244575 4867 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244587 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244597 4867 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244607 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244616 4867 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244626 4867 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244635 4867 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-slash\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244644 4867 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244653 4867 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-log-socket\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244664 4867 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244675 4867 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244685 4867 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244697 4867 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244706 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244716 4867 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/34391a30-5865-46e9-af5f-705cc3b11fba-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244724 4867 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.244734 4867 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-node-log\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.254006 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.264110 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7" (OuterVolumeSpecName: "kube-api-access-kmqj7") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "kube-api-access-kmqj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.273094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "34391a30-5865-46e9-af5f-705cc3b11fba" (UID: "34391a30-5865-46e9-af5f-705cc3b11fba"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345935 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345969 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-etc-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-slash\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.345991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346197 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: 
\"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346305 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-node-log\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-kubelet\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-var-lib-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc 
kubenswrapper[4867]: I0214 04:19:50.346426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-netd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-netns\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346575 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-ovn\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-openvswitch\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346751 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-log-socket\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-run-systemd\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346823 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346887 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-env-overrides\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-systemd-units\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.346917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347064 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347209 4867 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/34391a30-5865-46e9-af5f-705cc3b11fba-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347225 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmqj7\" (UniqueName: \"kubernetes.io/projected/34391a30-5865-46e9-af5f-705cc3b11fba-kube-api-access-kmqj7\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347215 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-cni-bin\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347242 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34391a30-5865-46e9-af5f-705cc3b11fba-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b78a78d-1660-47ec-a3c6-b826a798ef37-host-run-ovn-kubernetes\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-script-lib\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.347743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovnkube-config\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.351467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b78a78d-1660-47ec-a3c6-b826a798ef37-ovn-node-metrics-cert\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.366205 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl2kq\" (UniqueName: \"kubernetes.io/projected/6b78a78d-1660-47ec-a3c6-b826a798ef37-kube-api-access-bl2kq\") pod \"ovnkube-node-c58t7\" (UID: \"6b78a78d-1660-47ec-a3c6-b826a798ef37\") " pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.539070 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:19:50 crc kubenswrapper[4867]: W0214 04:19:50.568685 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b78a78d_1660_47ec_a3c6_b826a798ef37.slice/crio-84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229 WatchSource:0}: Error finding container 84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229: Status 404 returned error can't find the container with id 84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229 Feb 14 04:19:50 crc kubenswrapper[4867]: I0214 04:19:50.999164 4867 generic.go:334] "Generic (PLEG): container finished" podID="6b78a78d-1660-47ec-a3c6-b826a798ef37" containerID="b68125d78fd85d06fd9c2b62bbe98e953d49c4fa37c04c665ae8afbcb5398138" exitCode=0 Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.006436 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-acl-logging/0.log" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.007449 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6nndn_34391a30-5865-46e9-af5f-705cc3b11fba/ovn-controller/0.log" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.007867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerDied","Data":"b68125d78fd85d06fd9c2b62bbe98e953d49c4fa37c04c665ae8afbcb5398138"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.007922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"84a3ca4003e7bd652188bd707a7b9ecbfcacc491c3b3b524d767abeb0024f229"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008105 4867 generic.go:334] "Generic (PLEG): container finished" podID="34391a30-5865-46e9-af5f-705cc3b11fba" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" exitCode=0 Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008181 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008210 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008254 4867 scope.go:117] "RemoveContainer" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.008239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6nndn" event={"ID":"34391a30-5865-46e9-af5f-705cc3b11fba","Type":"ContainerDied","Data":"766035eb89c0e6059ab573e34c9ca67206f8aeefdcb68c749029bbaceeefc307"} Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.050741 4867 scope.go:117] "RemoveContainer" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.096498 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.105954 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6nndn"] Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.107695 4867 scope.go:117] "RemoveContainer" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.128641 4867 scope.go:117] "RemoveContainer" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.168871 4867 scope.go:117] "RemoveContainer" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.193737 4867 scope.go:117] "RemoveContainer" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.216974 4867 scope.go:117] "RemoveContainer" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.273664 4867 scope.go:117] "RemoveContainer" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.311366 4867 scope.go:117] "RemoveContainer" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334437 4867 scope.go:117] "RemoveContainer" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.334917 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": container with ID starting with e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444 not found: ID does not exist" containerID="e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334955 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444"} err="failed to get container status \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": rpc error: code = NotFound desc = could not find container \"e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444\": container with ID starting with e1b94247074b50625f56bc042c6a881f72145192ce803fa834d64741635d9444 not 
found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.334985 4867 scope.go:117] "RemoveContainer" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.335322 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": container with ID starting with b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5 not found: ID does not exist" containerID="b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335348 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5"} err="failed to get container status \"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": rpc error: code = NotFound desc = could not find container \"b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5\": container with ID starting with b353a2a6ce81989e21b42414fdc2911f63d44fbd94dd8c588a704ae66216d8b5 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335364 4867 scope.go:117] "RemoveContainer" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.335611 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": container with ID starting with ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd not found: ID does not exist" containerID="ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335634 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd"} err="failed to get container status \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": rpc error: code = NotFound desc = could not find container \"ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd\": container with ID starting with ee3393a612147da0ed4305cb2d2fab51792bf4aefb36be402a6faaa698793cfd not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.335651 4867 scope.go:117] "RemoveContainer" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336049 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": container with ID starting with d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633 not found: ID does not exist" containerID="d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336071 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633"} err="failed to get container status \"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": rpc error: code = NotFound desc = could not find container 
\"d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633\": container with ID starting with d9937714cb48d5e8bc3542473d8261629ced25c342c26baa13e57c3dc2ace633 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336091 4867 scope.go:117] "RemoveContainer" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336340 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": container with ID starting with 250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307 not found: ID does not exist" containerID="250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336365 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307"} err="failed to get container status \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": rpc error: code = NotFound desc = could not find container \"250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307\": container with ID starting with 250a34062c680cecaa28554a71a782da6fa1c3554900e8b4f2fa6c093f98e307 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336380 4867 scope.go:117] "RemoveContainer" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336607 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": container with ID starting with 92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18 not found: ID does not exist" containerID="92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336630 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18"} err="failed to get container status \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": rpc error: code = NotFound desc = could not find container \"92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18\": container with ID starting with 92014b6ab3e1d7c8631fb2a2fa44b60586bb67157c43e3528a7644584fd25b18 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336653 4867 scope.go:117] "RemoveContainer" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.336863 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": container with ID starting with 669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6 not found: ID does not exist" containerID="669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336885 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6"} 
err="failed to get container status \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": rpc error: code = NotFound desc = could not find container \"669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6\": container with ID starting with 669cee3fb4ea5c0247e6aa92962377d3152fc79d9022e03689b7c017f857e6e6 not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.336899 4867 scope.go:117] "RemoveContainer" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.337113 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": container with ID starting with e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e not found: ID does not exist" containerID="e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337135 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e"} err="failed to get container status \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": rpc error: code = NotFound desc = could not find container \"e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e\": container with ID starting with e713ec2c9e59ec516f5c2241a0e87501f4e83e05d6dadd3c54cd7f1cf11f7d1e not found: ID does not exist" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337149 4867 scope.go:117] "RemoveContainer" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: E0214 04:19:51.337358 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": container with ID starting with cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288 not found: ID does not exist" containerID="cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288" Feb 14 04:19:51 crc kubenswrapper[4867]: I0214 04:19:51.337385 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288"} err="failed to get container status \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": rpc error: code = NotFound desc = could not find container \"cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288\": container with ID starting with cc78efb328b501eac4cb3e248e5cc2652a1e923165413495b829497d9caa6288 not found: ID does not exist" Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e39d145a2223f01efcf78f1860ba0edcf7bacd85f3acadcbbcfc8a2538a351a3"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"c46d5153b8539c36d82f02078b671594fe0c7c666a32d5cb85c9333db720bd6f"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020800 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e54daa57901cc4b680b7f5f69f638890e05b8c6b7a460d696bac8aae447d5ec5"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"28b23ae4549cd208242d8e0a94c5d121764cc9af9566976e376ca094c0931a05"} Feb 14 04:19:52 crc kubenswrapper[4867]: I0214 04:19:52.020831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"2d0e373d70ea57ce93d860ad33bb4a9ad3c10f6ac08fe79c5a4f8ecd58b4104d"} Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.006850 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34391a30-5865-46e9-af5f-705cc3b11fba" path="/var/lib/kubelet/pods/34391a30-5865-46e9-af5f-705cc3b11fba/volumes" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.030608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"4053b5fc4e7c5d07a1bbdd744111208b45c99da6277ba21afde0062a749c4888"} Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.322119 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.323631 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.326978 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.327167 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.333382 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-9jn2g" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.406594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.480567 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.482164 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.488970 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-5jt8v" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.489278 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.499909 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.501042 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.507940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.535146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp2q6\" (UniqueName: \"kubernetes.io/projected/987816d4-f9a4-47da-983c-317f9a3f4d86-kube-api-access-vp2q6\") pod \"obo-prometheus-operator-68bc856cb9-vwlcr\" (UID: \"987816d4-f9a4-47da-983c-317f9a3f4d86\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.610739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611352 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.611401 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.640101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.686996 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.688594 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.695959 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.696690 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-m6hgx" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.700939 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701060 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701135 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.701233 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(8670ebfb80def73022e9ac71dc0d989292b6b36e0f026f519130146e71180d72): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.712965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713391 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.713576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.723910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj\" (UID: \"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.727616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ecc414b-6bac-4b24-99c5-e2d1fb67f314-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr\" (UID: \"5ecc414b-6bac-4b24-99c5-e2d1fb67f314\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.798085 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.814722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.814785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.818748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.822169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/94f47db9-4437-4b3e-aee5-f6f65e715e62-observability-operator-tls\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834013 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834160 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834236 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.834336 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(42b0ab45a98bb0c5369ed3eec4f3619b039a6a34acb087535ffe1c2a8171bd8e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.834580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgnl\" (UniqueName: \"kubernetes.io/projected/94f47db9-4437-4b3e-aee5-f6f65e715e62-kube-api-access-fsgnl\") pod \"observability-operator-59bdc8b94-kv4j7\" (UID: \"94f47db9-4437-4b3e-aee5-f6f65e715e62\") " pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855430 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855498 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855536 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:53 crc kubenswrapper[4867]: E0214 04:19:53.855580 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(fe1c9f4f2a49c49402576aa486145a6b9b778095b39ec24b789372223cc5e663): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.891376 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"] Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.892140 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.895913 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-dp82x" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.915573 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:53 crc kubenswrapper[4867]: I0214 04:19:53.915624 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.016764 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.016989 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.017904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/31f03187-50f6-4015-afdc-422455a63006-openshift-service-ca\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.052835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85ck8\" (UniqueName: \"kubernetes.io/projected/31f03187-50f6-4015-afdc-422455a63006-kube-api-access-85ck8\") pod \"perses-operator-5bf474d74f-7qfh9\" (UID: \"31f03187-50f6-4015-afdc-422455a63006\") " pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.061761 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085652 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085835 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085909 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.085995 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(4b082bee08d8df616d15b9598d3c489d24d41f7cdf77bb00ac47b37923c2e1eb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62"
Feb 14 04:19:54 crc kubenswrapper[4867]: I0214 04:19:54.211994 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248480 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248567 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248589 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:19:54 crc kubenswrapper[4867]: E0214 04:19:54.248632 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(30752b41e66f722175658a9a417aaa79c79e317ef4960e2ce3a798adad548b27): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006"
Feb 14 04:19:55 crc kubenswrapper[4867]: I0214 04:19:55.045841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"e4b291afb2db520f18f44963148c2bf71665ebbd19355fb7c116485db5332b52"}
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.064147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" event={"ID":"6b78a78d-1660-47ec-a3c6-b826a798ef37","Type":"ContainerStarted","Data":"02243c13ceb5b936438b09dd590ac5f5a805cbc8f6fdb50df02d30457d02d0e6"}
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.066127 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.066154 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.110621 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" podStartSLOduration=7.110603088 podStartE2EDuration="7.110603088s" podCreationTimestamp="2026-02-14 04:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:19:57.106278805 +0000 UTC m=+629.187216119" watchObservedRunningTime="2026-02-14 04:19:57.110603088 +0000 UTC m=+629.191540402"
Feb 14 04:19:57 crc kubenswrapper[4867]: I0214 04:19:57.118693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.070534 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.112072 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.193502 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.193733 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.194464 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.196982 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.197074 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.197361 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.201794 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.201930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.202463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216234 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"]
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.216880 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263694 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263789 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.263837 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(710e9eb82aac9befa3a5293c944632b703e88c2f0ed67b30b9b9efcde2540209): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299791 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299897 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299932 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299987 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(8b72fb8c0c74f29bd828fa07dc48b0cd8902d95bcd12a840ce9e750d6679024e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.299800 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300131 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300161 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.300210 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(cc9a9d0ff2448e861e04f466dd2c904d6bf08c8e212891b521c6e1a93466684a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305768 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305843 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305870 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.305921 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(cffb5a3566e011e87476d91749d28cb5d7174605988597ae1457fc67e06568cf): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86" Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.339574 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"] Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.339752 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:58 crc kubenswrapper[4867]: I0214 04:19:58.340373 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.397723 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399380 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399480 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:19:58 crc kubenswrapper[4867]: E0214 04:19:58.399650 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(1586869cea8578ccfe7db68de413be8426fda91b15912bcf3aedfb7fb5b209d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" Feb 14 04:20:03 crc kubenswrapper[4867]: I0214 04:20:03.997460 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" Feb 14 04:20:04 crc kubenswrapper[4867]: E0214 04:20:03.998376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-fl729_openshift-multus(fb77d03e-6ead-48b5-a96a-db4cbd540192)\"" pod="openshift-multus/multus-fl729" podUID="fb77d03e-6ead-48b5-a96a-db4cbd540192" Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996362 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996400 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:08 crc kubenswrapper[4867]: I0214 04:20:08.996530 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.003895 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.004216 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.004939 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.074907 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.074997 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.075026 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.075079 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-7qfh9_openshift-operators(31f03187-50f6-4015-afdc-422455a63006)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-7qfh9_openshift-operators_31f03187-50f6-4015-afdc-422455a63006_0(dbf23939473170c04f7c49bdb95bdb503375ed57c7166938eb037a7d179e262c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078570 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078691 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078732 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.078828 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators(8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_openshift-operators_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06_0(5c109dbd675ce1f37912ce01fc12701c6b0a745d3d79ad504bde775fd2f2811c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podUID="8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105535 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105635 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105666 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:09 crc kubenswrapper[4867]: E0214 04:20:09.105744 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-kv4j7_openshift-operators(94f47db9-4437-4b3e-aee5-f6f65e715e62)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-kv4j7_openshift-operators_94f47db9-4437-4b3e-aee5-f6f65e715e62_0(5fa4df8545a62fa65b3158364646356fd5dc9c34c1116b5c69b7111570dff521): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.997150 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:09 crc kubenswrapper[4867]: I0214 04:20:09.998314 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.020842 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.020963 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.021002 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:10 crc kubenswrapper[4867]: E0214 04:20:10.021084 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators(987816d4-f9a4-47da-983c-317f9a3f4d86)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-vwlcr_openshift-operators_987816d4-f9a4-47da-983c-317f9a3f4d86_0(bc12c67cdb43ba6d20249fb5fb7dc6c2c75b92e921295a1e4507ae9d8d31116d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podUID="987816d4-f9a4-47da-983c-317f9a3f4d86" Feb 14 04:20:13 crc kubenswrapper[4867]: I0214 04:20:13.997027 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:13 crc kubenswrapper[4867]: I0214 04:20:13.997821 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035121 4867 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035185 4867 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035207 4867 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:14 crc kubenswrapper[4867]: E0214 04:20:14.035261 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators(5ecc414b-6bac-4b24-99c5-e2d1fb67f314)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_openshift-operators_5ecc414b-6bac-4b24-99c5-e2d1fb67f314_0(85a457d248997e9b41e579f4ad1b8732f8b1ac35f12cb9c5b5ae8fa9740c6455): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podUID="5ecc414b-6bac-4b24-99c5-e2d1fb67f314" Feb 14 04:20:15 crc kubenswrapper[4867]: I0214 04:20:15.997833 4867 scope.go:117] "RemoveContainer" containerID="b07a230a65d345e7f64ecb41b905a120a6174dc5229f73c67b086608b27b5a72" Feb 14 04:20:16 crc kubenswrapper[4867]: I0214 04:20:16.190810 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fl729_fb77d03e-6ead-48b5-a96a-db4cbd540192/kube-multus/2.log" Feb 14 04:20:16 crc kubenswrapper[4867]: I0214 04:20:16.191179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fl729" event={"ID":"fb77d03e-6ead-48b5-a96a-db4cbd540192","Type":"ContainerStarted","Data":"64b5a084853a3d1ad08a41ce2324a38f7ea9e21f0b5662f7fbd7a03aa0fb2e2b"} Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.560294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c58t7" Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.996801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:20 crc kubenswrapper[4867]: I0214 04:20:20.997772 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.430166 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr"] Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.997217 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:21 crc kubenswrapper[4867]: I0214 04:20:21.998017 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.253653 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" event={"ID":"987816d4-f9a4-47da-983c-317f9a3f4d86","Type":"ContainerStarted","Data":"26824e7cea2dae02dc3534ea1722997deea1091e4a48e6085d54be1784dce4e4"} Feb 14 04:20:22 crc kubenswrapper[4867]: W0214 04:20:22.566010 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94f47db9_4437_4b3e_aee5_f6f65e715e62.slice/crio-7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452 WatchSource:0}: Error finding container 7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452: Status 404 returned error can't find the container with id 7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452 Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.577342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-kv4j7"] Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.997213 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:22 crc kubenswrapper[4867]: I0214 04:20:22.998234 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.260136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" event={"ID":"94f47db9-4437-4b3e-aee5-f6f65e715e62","Type":"ContainerStarted","Data":"7aa0fd1cf526f0e35cec383083999cfdfd69adcfa5a134fc3ad39677eafce452"} Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.457781 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-7qfh9"] Feb 14 04:20:23 crc kubenswrapper[4867]: W0214 04:20:23.464988 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f03187_50f6_4015_afdc_422455a63006.slice/crio-f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd WatchSource:0}: Error finding container f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd: Status 404 returned error can't find the container with id f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.996926 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:23 crc kubenswrapper[4867]: I0214 04:20:23.997389 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" Feb 14 04:20:24 crc kubenswrapper[4867]: I0214 04:20:24.271063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" event={"ID":"31f03187-50f6-4015-afdc-422455a63006","Type":"ContainerStarted","Data":"f8c957039d7c9e3fd7c41aff90dbdd42e7c13e6b0758f453407b6fbf0c679dbd"} Feb 14 04:20:25 crc kubenswrapper[4867]: I0214 04:20:25.000666 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:25 crc kubenswrapper[4867]: I0214 04:20:25.001305 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" Feb 14 04:20:26 crc kubenswrapper[4867]: I0214 04:20:26.280307 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr"] Feb 14 04:20:26 crc kubenswrapper[4867]: W0214 04:20:26.297655 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ecc414b_6bac_4b24_99c5_e2d1fb67f314.slice/crio-71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40 WatchSource:0}: Error finding container 71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40: Status 404 returned error can't find the container with id 71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40 Feb 14 04:20:26 crc kubenswrapper[4867]: I0214 04:20:26.327280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj"] Feb 14 04:20:26 crc kubenswrapper[4867]: W0214 04:20:26.334579 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7f9ea9_2c5c_4e9c_97b2_02dd8a216d06.slice/crio-728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3 WatchSource:0}: Error finding container 728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3: Status 404 returned error can't find the container with id 728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3 Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.308940 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" event={"ID":"987816d4-f9a4-47da-983c-317f9a3f4d86","Type":"ContainerStarted","Data":"cbf9077b91953cb3be07a2606e135c0b662c7ab6313046b9f2d7f2c4a5008722"} Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.310754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" event={"ID":"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06","Type":"ContainerStarted","Data":"728f5e53a7eb5b51e067555e69ab17aa273bb22ad5b3107682ad45619622eda3"} Feb 14 04:20:27 crc kubenswrapper[4867]: I0214 04:20:27.312108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" event={"ID":"5ecc414b-6bac-4b24-99c5-e2d1fb67f314","Type":"ContainerStarted","Data":"71a8d1f58b33dd7770709fc95712d7a9634de0fef0abbf1b1c32a81748e38b40"} Feb 14 04:20:29 crc kubenswrapper[4867]: I0214 04:20:29.046813 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-vwlcr" podStartSLOduration=31.423637257 podStartE2EDuration="36.046790021s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:21.449434826 +0000 UTC m=+653.530372160" lastFinishedPulling="2026-02-14 04:20:26.07258761 +0000 UTC m=+658.153524924" observedRunningTime="2026-02-14 04:20:27.342403718 +0000 UTC m=+659.423341342" watchObservedRunningTime="2026-02-14 04:20:29.046790021 +0000 UTC m=+661.127727335" Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.358961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" event={"ID":"31f03187-50f6-4015-afdc-422455a63006","Type":"ContainerStarted","Data":"bb4dc5e070beeca7160b299e1daf4e9ad29a2f879f5242e662fd2e946c49bc73"}
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.359698 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9"
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.361089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" event={"ID":"94f47db9-4437-4b3e-aee5-f6f65e715e62","Type":"ContainerStarted","Data":"71bc852131d72d72e543b26eddd8266b87750cb6e354e70eb54aa965c01b1cbc"}
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.362117 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.367898 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7"
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.422444 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podStartSLOduration=31.441289864 podStartE2EDuration="38.422402381s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:23.470563456 +0000 UTC m=+655.551500780" lastFinishedPulling="2026-02-14 04:20:30.451675983 +0000 UTC m=+662.532613297" observedRunningTime="2026-02-14 04:20:31.388783848 +0000 UTC m=+663.469721162" watchObservedRunningTime="2026-02-14 04:20:31.422402381 +0000 UTC m=+663.503339695"
Feb 14 04:20:31 crc kubenswrapper[4867]: I0214 04:20:31.429863 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podStartSLOduration=30.514869796 podStartE2EDuration="38.429828323s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:22.570590055 +0000 UTC m=+654.651527379" lastFinishedPulling="2026-02-14 04:20:30.485548592 +0000 UTC m=+662.566485906" observedRunningTime="2026-02-14 04:20:31.412768381 +0000 UTC m=+663.493705695" watchObservedRunningTime="2026-02-14 04:20:31.429828323 +0000 UTC m=+663.510765637"
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.373056 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" event={"ID":"8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06","Type":"ContainerStarted","Data":"307febcdcdb9c846745d1413ad562cac6ff49caa8d054c359db0a38e9e44ac19"}
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.376700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" event={"ID":"5ecc414b-6bac-4b24-99c5-e2d1fb67f314","Type":"ContainerStarted","Data":"f8cd56a8e3e561bf25e54947fdc28176cacf29e6f79645dff743e3cef10f1b11"}
Feb 14 04:20:32 crc kubenswrapper[4867]: I0214 04:20:32.405043 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj" podStartSLOduration=33.905051559 podStartE2EDuration="39.405026526s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:26.338252633 +0000 UTC m=+658.419189947" lastFinishedPulling="2026-02-14 04:20:31.83822759 +0000 UTC m=+663.919164914" observedRunningTime="2026-02-14 04:20:32.39940491 +0000 UTC m=+664.480342224" watchObservedRunningTime="2026-02-14 04:20:32.405026526 +0000 UTC m=+664.485963840"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.065991 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr" podStartSLOduration=42.544124155 podStartE2EDuration="48.065953939s" podCreationTimestamp="2026-02-14 04:19:53 +0000 UTC" firstStartedPulling="2026-02-14 04:20:26.302722161 +0000 UTC m=+658.383659475" lastFinishedPulling="2026-02-14 04:20:31.824551945 +0000 UTC m=+663.905489259" observedRunningTime="2026-02-14 04:20:32.445030954 +0000 UTC m=+664.525968268" watchObservedRunningTime="2026-02-14 04:20:41.065953939 +0000 UTC m=+673.146891253"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.070908 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.092835 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.099047 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sqjrv"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.099432 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.113885 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.119589 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.146134 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.147132 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.152204 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-59r5z"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.156104 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.160299 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.161113 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.162845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-prrbb"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.169922 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.169997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.172168 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"]
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.272422 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.296417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tp29\" (UniqueName: \"kubernetes.io/projected/1f305679-0f4d-440e-a053-7b3627eaae9c-kube-api-access-9tp29\") pod \"cert-manager-858654f9db-gslqt\" (UID: \"1f305679-0f4d-440e-a053-7b3627eaae9c\") " pod="cert-manager/cert-manager-858654f9db-gslqt"
Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.297077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwjr\" (UniqueName: \"kubernetes.io/projected/2224c85e-13be-400d-abf8-6b412d8c55ee-kube-api-access-gbwjr\") pod \"cert-manager-cainjector-cf98fcc89-s4258\" (UID: \"2224c85e-13be-400d-abf8-6b412d8c55ee\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258"
volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.396296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4xrk\" (UniqueName: \"kubernetes.io/projected/34f53dfe-4707-4a5c-8745-c4ed944c6a6a-kube-api-access-n4xrk\") pod \"cert-manager-webhook-687f57d79b-xlg4t\" (UID: \"34f53dfe-4707-4a5c-8745-c4ed944c6a6a\") " pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.436026 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.479128 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-gslqt" Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.493973 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.912989 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-s4258"] Feb 14 04:20:41 crc kubenswrapper[4867]: I0214 04:20:41.989377 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-xlg4t"] Feb 14 04:20:41 crc kubenswrapper[4867]: W0214 04:20:41.993588 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34f53dfe_4707_4a5c_8745_c4ed944c6a6a.slice/crio-cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75 WatchSource:0}: Error finding container cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75: Status 404 returned error can't find the container with id cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75 Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.000215 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-gslqt"] Feb 14 04:20:42 crc kubenswrapper[4867]: W0214 04:20:42.001423 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f305679_0f4d_440e_a053_7b3627eaae9c.slice/crio-036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0 WatchSource:0}: Error finding container 036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0: Status 404 returned error can't find the container with id 036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0 Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.436912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" event={"ID":"2224c85e-13be-400d-abf8-6b412d8c55ee","Type":"ContainerStarted","Data":"e94df9d68f3febd087b09eb3471b463abe9c0f2f8cd35bcbe0c1a1e6258073d1"} Feb 14 04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.439739 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" event={"ID":"34f53dfe-4707-4a5c-8745-c4ed944c6a6a","Type":"ContainerStarted","Data":"cceea836a859096c2528ef16d5b9fc5fd75550478c027f54bed28b0c0f55ab75"} Feb 14 
04:20:42 crc kubenswrapper[4867]: I0214 04:20:42.441293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gslqt" event={"ID":"1f305679-0f4d-440e-a053-7b3627eaae9c","Type":"ContainerStarted","Data":"036c882d9b8b4bb344257333e24d37bd0a6818e678322c2c019421efa57ea5e0"} Feb 14 04:20:44 crc kubenswrapper[4867]: I0214 04:20:44.214941 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.484071 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-gslqt" event={"ID":"1f305679-0f4d-440e-a053-7b3627eaae9c","Type":"ContainerStarted","Data":"e95342b2b45d020e15caa63292a465041cb8836c16cb24c6bb87232b7fd208eb"} Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.499119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" event={"ID":"2224c85e-13be-400d-abf8-6b412d8c55ee","Type":"ContainerStarted","Data":"c36b93a354b0b41cf00fdfe21e7201f2636dd31bad4023539836d40f441ab4b3"} Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.501755 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-gslqt" podStartSLOduration=2.101220248 podStartE2EDuration="5.501743491s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:42.003539746 +0000 UTC m=+674.084477070" lastFinishedPulling="2026-02-14 04:20:45.404062999 +0000 UTC m=+677.485000313" observedRunningTime="2026-02-14 04:20:46.500135889 +0000 UTC m=+678.581073203" watchObservedRunningTime="2026-02-14 04:20:46.501743491 +0000 UTC m=+678.582680805" Feb 14 04:20:46 crc kubenswrapper[4867]: I0214 04:20:46.518478 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-s4258" podStartSLOduration=2.052952155 podStartE2EDuration="5.518460724s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:41.929612158 +0000 UTC m=+674.010549472" lastFinishedPulling="2026-02-14 04:20:45.395120727 +0000 UTC m=+677.476058041" observedRunningTime="2026-02-14 04:20:46.517654823 +0000 UTC m=+678.598592137" watchObservedRunningTime="2026-02-14 04:20:46.518460724 +0000 UTC m=+678.599398038" Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.510945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" event={"ID":"34f53dfe-4707-4a5c-8745-c4ed944c6a6a","Type":"ContainerStarted","Data":"43675d8fb1f5f7952da09285b8b5e7514389d4902a871cc81a26a7e94a924dce"} Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.511639 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" Feb 14 04:20:48 crc kubenswrapper[4867]: I0214 04:20:48.527192 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podStartSLOduration=1.7720035859999999 podStartE2EDuration="7.527176814s" podCreationTimestamp="2026-02-14 04:20:41 +0000 UTC" firstStartedPulling="2026-02-14 04:20:41.996269798 +0000 UTC m=+674.077207112" lastFinishedPulling="2026-02-14 04:20:47.751443026 +0000 UTC m=+679.832380340" observedRunningTime="2026-02-14 04:20:48.524555355 +0000 UTC m=+680.605492679" watchObservedRunningTime="2026-02-14 04:20:48.527176814 +0000 UTC m=+680.608114128"
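
The three "Observed pod startup duration" records above (and every other one in this capture) fit the same accounting: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A stdlib-only check against the cert-manager-webhook entry, with the values copied verbatim from the log (this reproduces the arithmetic, not the kubelet's tracker code):

```go
package main

import (
	"fmt"
	"time"
)

// mustParse uses the layout the kubelet prints in the entries above.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstStartedPulling := mustParse("2026-02-14 04:20:41.996269798 +0000 UTC")
	lastFinishedPulling := mustParse("2026-02-14 04:20:47.751443026 +0000 UTC")
	e2e := 7.527176814 // podStartE2EDuration, in seconds

	pull := lastFinishedPulling.Sub(firstStartedPulling).Seconds()
	fmt.Printf("image-pull window: %.9fs\n", pull)     // 5.755173228s
	fmt.Printf("SLO duration:      %.9fs\n", e2e-pull) // 1.772003586s
}
```

The result matches podStartSLOduration=1.7720035859999999 above; the long tail on the logged value is just float64 formatting of the same difference.
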
Feb 14 04:20:56 crc kubenswrapper[4867]: I0214 04:20:56.496670 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.053794 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"] Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.055405 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.059013 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.066163 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"] Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.105937 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.106236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.106963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.208871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.208961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209465 4867 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.209994 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.234007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.252059 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"] Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.253426 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.257498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"] Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.311091 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.403099 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.413852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.414099 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.445127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.577801 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:22 crc kubenswrapper[4867]: I0214 04:21:22.851835 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j"] Feb 14 04:21:22 crc kubenswrapper[4867]: W0214 04:21:22.862723 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf62ec3e_1c1b_400e_bdb9_ba34fc8ef5fe.slice/crio-b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981 WatchSource:0}: Error finding container b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981: Status 404 returned error can't find the container with id b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981 Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.006391 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv"] Feb 14 04:21:23 crc kubenswrapper[4867]: W0214 04:21:23.015271 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod936b69da_ce28_43de_8fcf_82e83936de1b.slice/crio-0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944 WatchSource:0}: Error finding container 0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944: Status 404 returned error can't find the container with id 0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944 Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772173 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="6bfa7c6acd7c8c92626e99a48aecf16ab0c89ae282cd4b5118f689bf23a2ab52" exitCode=0 Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772247 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"6bfa7c6acd7c8c92626e99a48aecf16ab0c89ae282cd4b5118f689bf23a2ab52"} Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.772491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerStarted","Data":"0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944"} Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778411 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="9e99be5c9abef532eba38f6dff91b8ae91fc0cced050278867e635b112e193c5" exitCode=0 Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778456 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"9e99be5c9abef532eba38f6dff91b8ae91fc0cced050278867e635b112e193c5"} Feb 14 04:21:23 crc kubenswrapper[4867]: I0214 04:21:23.778487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" 
event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerStarted","Data":"b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981"} Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.794896 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="8f5d917721b65e0e84135424ad3901bd1acb03297e90b28dad3e52574bf58538" exitCode=0 Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.794965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"8f5d917721b65e0e84135424ad3901bd1acb03297e90b28dad3e52574bf58538"} Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.798708 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="febc7d092e485320820377013c4367f941560dd0a33e6efe70f78f0bf91202e8" exitCode=0 Feb 14 04:21:25 crc kubenswrapper[4867]: I0214 04:21:25.798764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"febc7d092e485320820377013c4367f941560dd0a33e6efe70f78f0bf91202e8"} Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.809623 4867 generic.go:334] "Generic (PLEG): container finished" podID="936b69da-ce28-43de-8fcf-82e83936de1b" containerID="70d0e664d24e7987723b393e88112cf3a22e64c5f670b8ef56e251a1202d5cd7" exitCode=0 Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.809734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"70d0e664d24e7987723b393e88112cf3a22e64c5f670b8ef56e251a1202d5cd7"} Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.812721 4867 generic.go:334] "Generic (PLEG): container finished" podID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerID="89369855acb4f048b792a2b970408f0f9a668d7d4ff843ff88f8102b02cc83d4" exitCode=0 Feb 14 04:21:26 crc kubenswrapper[4867]: I0214 04:21:26.812770 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"89369855acb4f048b792a2b970408f0f9a668d7d4ff843ff88f8102b02cc83d4"} Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.147789 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.153212 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211261 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") pod \"936b69da-ce28-43de-8fcf-82e83936de1b\" (UID: \"936b69da-ce28-43de-8fcf-82e83936de1b\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211439 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.211470 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") pod \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\" (UID: \"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe\") " Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.212443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle" (OuterVolumeSpecName: "bundle") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.212535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle" (OuterVolumeSpecName: "bundle") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.219010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd" (OuterVolumeSpecName: "kube-api-access-pnpvd") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "kube-api-access-pnpvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.219825 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h" (OuterVolumeSpecName: "kube-api-access-dm59h") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "kube-api-access-dm59h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.226383 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util" (OuterVolumeSpecName: "util") pod "936b69da-ce28-43de-8fcf-82e83936de1b" (UID: "936b69da-ce28-43de-8fcf-82e83936de1b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.312976 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dm59h\" (UniqueName: \"kubernetes.io/projected/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-kube-api-access-dm59h\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313006 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313018 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313047 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnpvd\" (UniqueName: \"kubernetes.io/projected/936b69da-ce28-43de-8fcf-82e83936de1b-kube-api-access-pnpvd\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.313061 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/936b69da-ce28-43de-8fcf-82e83936de1b-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.486447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util" (OuterVolumeSpecName: "util") pod "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" (UID: "af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
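
The block above is the teardown half of the kubelet volume reconciler for the two finished openshift-marketplace bundle pods: each volume gets an "UnmountVolume started" entry (reconciler_common.go:159), then an "UnmountVolume.TearDown succeeded" entry (operation_generator.go:803), and finally a "Volume detached" confirmation (reconciler_common.go:293). When auditing a capture like this for leaked mounts, pairing starts with detach confirmations per volume name is usually enough. A rough triage sketch, assuming one journal entry per line on stdin (not kubelet code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Volume names are printed as \"util\" inside structured entries, so
	// the backslashes before the quotes are optional in these patterns.
	started := regexp.MustCompile(`UnmountVolume started for volume \\?"([^"\\]+)\\?"`)
	detached := regexp.MustCompile(`Volume detached for volume \\?"([^"\\]+)\\?"`)
	pending := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20) // kubelet entries can be very long
	for sc.Scan() {
		if m := started.FindStringSubmatch(sc.Text()); m != nil {
			pending[m[1]]++
		}
		if m := detached.FindStringSubmatch(sc.Text()); m != nil {
			pending[m[1]]--
		}
	}
	for vol, n := range pending {
		if n != 0 {
			fmt.Printf("volume %q: %d unmount(s) never confirmed detached\n", vol, n)
		}
	}
}
```

Run over this excerpt, every counter returns to zero: util, bundle, and the kube-api-access-* volumes of both pods are all detached within the same second.
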
Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.517235 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" event={"ID":"af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe","Type":"ContainerDied","Data":"b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981"} Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832243 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e274fdc6cbf91b4e5bee40ce408a9125d51193a6f3175ada706d276c5b1981" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.832329 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.837007 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.836983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv" event={"ID":"936b69da-ce28-43de-8fcf-82e83936de1b","Type":"ContainerDied","Data":"0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944"} Feb 14 04:21:28 crc kubenswrapper[4867]: I0214 04:21:28.837196 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a21e4148e4071bf6bd851a591d5c60b2b0ebe95fb20bb5bdb5bed099e6a4944" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421116 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421919 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421934 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421948 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421955 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421966 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421972 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.421992 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.421998 4867 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.422009 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422015 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="pull" Feb 14 04:21:39 crc kubenswrapper[4867]: E0214 04:21:39.422027 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422032 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="util" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422162 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422178 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="936b69da-ce28-43de-8fcf-82e83936de1b" containerName="extract" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.422904 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.429151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.429424 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430094 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430231 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430340 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.430459 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-7c2k7" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.463378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580487 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580540 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: 
\"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580587 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.580785 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.681944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682012 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.682082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 
04:21:39.682123 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.683231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/4a918644-d451-4f71-8a69-627b0de1ebb7-manager-config\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.689087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-webhook-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.691138 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-apiservice-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.695111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4a918644-d451-4f71-8a69-627b0de1ebb7-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.738255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2rd6\" (UniqueName: \"kubernetes.io/projected/4a918644-d451-4f71-8a69-627b0de1ebb7-kube-api-access-b2rd6\") pod \"loki-operator-controller-manager-5479889c99-ltnxf\" (UID: \"4a918644-d451-4f71-8a69-627b0de1ebb7\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:39 crc kubenswrapper[4867]: I0214 04:21:39.744006 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:40 crc kubenswrapper[4867]: I0214 04:21:40.042288 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf"] Feb 14 04:21:40 crc kubenswrapper[4867]: I0214 04:21:40.969178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"b994afbb522d24b99f5b88b1fbd3b41a5d82670388c2b6f24ccd5dd218f84162"} Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.169206 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.170064 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.171962 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.172210 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.172686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-v884l" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.189310 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.231154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod \"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.332195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod \"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.361111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vmf2\" (UniqueName: \"kubernetes.io/projected/89b20edb-1b24-48e1-accf-f0a2b65c8da1-kube-api-access-6vmf2\") pod \"cluster-logging-operator-c769fd969-pmdnk\" (UID: \"89b20edb-1b24-48e1-accf-f0a2b65c8da1\") " pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.488306 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk"
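
The message prefixes repeated throughout this capture identify what drives each kubelet sync iteration: "SyncLoop ADD" and "SyncLoop UPDATE" carry pod manifests from the API server (source="api"), "SyncLoop (PLEG)" relays container state changes from the pod lifecycle event generator, "SyncLoop (probe)" reports readiness transitions, and util.go:30's "No sandbox for pod can be found. Need to start a new one" marks a pod's first sync, after which a fresh sandbox is created through the CRI runtime. A quick histogram of these markers shows what is producing churn; a hypothetical triage helper in stdlib Go (again one entry per line on stdin, not kubelet code):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	markers := []string{
		`"SyncLoop ADD"`,                  // new pod from the API server
		`"SyncLoop UPDATE"`,               // pod update from the API server
		"SyncLoop (PLEG)",                 // container lifecycle event
		"SyncLoop (probe)",                // probe result transition
		"No sandbox for pod can be found", // first sync, sandbox needed
	}
	counts := make(map[string]int, len(markers))

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1<<20), 1<<20)
	for sc.Scan() {
		for _, m := range markers {
			if strings.Contains(sc.Text(), m) {
				counts[m]++
			}
		}
	}
	for _, m := range markers {
		fmt.Printf("%-40s %d\n", m, counts[m])
	}
}
```
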
Feb 14 04:21:42 crc kubenswrapper[4867]: I0214 04:21:42.993964 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-pmdnk"] Feb 14 04:21:46 crc kubenswrapper[4867]: I0214 04:21:46.007035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" event={"ID":"89b20edb-1b24-48e1-accf-f0a2b65c8da1","Type":"ContainerStarted","Data":"6b1e31a6875202fbbbcbddba12eee32dd303506c468bcfc3eeffb2edb2233e83"} Feb 14 04:21:46 crc kubenswrapper[4867]: I0214 04:21:46.009210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.116686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" event={"ID":"89b20edb-1b24-48e1-accf-f0a2b65c8da1","Type":"ContainerStarted","Data":"3cf55d4e6e13765ab8cd9dc9a5d145fd9be51067503785dcd4d85e10f972cae1"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.119392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"d87cafe09abaf2bf091dfef60ad31bf9fbb60a8b8a09fb6c7224b5451333cab6"} Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.119647 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.121898 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.139694 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-pmdnk" podStartSLOduration=3.921919325 podStartE2EDuration="16.139668184s" podCreationTimestamp="2026-02-14 04:21:42 +0000 UTC" firstStartedPulling="2026-02-14 04:21:45.155871913 +0000 UTC m=+737.236809227" lastFinishedPulling="2026-02-14 04:21:57.373620772 +0000 UTC m=+749.454558086" observedRunningTime="2026-02-14 04:21:58.132691004 +0000 UTC m=+750.213628318" watchObservedRunningTime="2026-02-14 04:21:58.139668184 +0000 UTC m=+750.220605498" Feb 14 04:21:58 crc kubenswrapper[4867]: I0214 04:21:58.175968 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podStartSLOduration=1.856826361 podStartE2EDuration="19.175951042s" podCreationTimestamp="2026-02-14 04:21:39 +0000 UTC" firstStartedPulling="2026-02-14 04:21:40.055254091 +0000 UTC m=+732.136191405" lastFinishedPulling="2026-02-14 04:21:57.374378772 +0000 UTC m=+749.455316086" observedRunningTime="2026-02-14 04:21:58.174269938 +0000 UTC m=+750.255207292" watchObservedRunningTime="2026-02-14 04:21:58.175951042 +0000 UTC m=+750.256888356" Feb 14 04:22:00 crc kubenswrapper[4867]: I0214 04:22:00.886390 4867 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 14 04:22:01 crc kubenswrapper[4867]: I0214 04:22:01.251226 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:22:01 crc kubenswrapper[4867]: I0214 04:22:01.251323 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.451744 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.453778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457162 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457444 4867 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-vzthp" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.457681 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.460032 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.482242 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.482343 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.583465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.583568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.586631 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
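
The csi_attacher.go:380 entry closing the minio volume setup is the expected path for kubevirt.io.hostpath-provisioner: the driver does not advertise the STAGE_UNSTAGE_VOLUME node capability, so the kubelet skips the NodeStageVolume step and records MountDevice as a trivial success (the "MountVolume.MountDevice succeeded ... device mount path ..." entry just below), going straight on to the publish step of MountVolume.SetUp. A simplified paraphrase of that branch (illustrative only, not kubelet source):

```go
package main

import "fmt"

// driver is a stand-in for a registered CSI plugin; canStage mirrors
// whether it advertises the STAGE_UNSTAGE_VOLUME node capability.
type driver struct {
	name     string
	canStage bool
}

// mountDevice paraphrases the decision behind "STAGE_UNSTAGE_VOLUME
// capability not set. Skipping MountDevice...".
func mountDevice(d driver, globalMountPath string) {
	if !d.canStage {
		fmt.Printf("%s: capability not set, skipping NodeStageVolume\n", d.name)
		return // still reported as "MountVolume.MountDevice succeeded"
	}
	fmt.Printf("%s: NodeStageVolume -> %s\n", d.name, globalMountPath)
}

func main() {
	// The hostpath provisioner in this log behaves like the first case.
	mountDevice(driver{name: "kubevirt.io.hostpath-provisioner", canStage: false},
		"/var/lib/kubelet/plugins/kubernetes.io/csi/.../globalmount")
}
```
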
Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.586668 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/eb91cce80dbcfbcaac1779d0ca18fe386616c5db8c3101f1555325d53b799300/globalmount\"" pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.601764 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8lr8\" (UniqueName: \"kubernetes.io/projected/ca1edb5b-df43-4a3d-83ea-01030d18e02e-kube-api-access-q8lr8\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.627936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5183e0b-0b24-4d6e-b6f6-c0b18653433e\") pod \"minio\" (UID: \"ca1edb5b-df43-4a3d-83ea-01030d18e02e\") " pod="minio-dev/minio" Feb 14 04:22:02 crc kubenswrapper[4867]: I0214 04:22:02.799075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 14 04:22:03 crc kubenswrapper[4867]: I0214 04:22:03.449333 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 14 04:22:04 crc kubenswrapper[4867]: I0214 04:22:04.162593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca1edb5b-df43-4a3d-83ea-01030d18e02e","Type":"ContainerStarted","Data":"46a2d9186bae73d2af69ff00c8a20c35525c98e39295b27fccdeeb957d08e4e1"} Feb 14 04:22:08 crc kubenswrapper[4867]: I0214 04:22:08.189636 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"ca1edb5b-df43-4a3d-83ea-01030d18e02e","Type":"ContainerStarted","Data":"9c21f44f4c013c7abebbf1fb3807ffb0abaac14a1032897789288eda0314b507"} Feb 14 04:22:08 crc kubenswrapper[4867]: I0214 04:22:08.206644 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.641241569 podStartE2EDuration="9.206619008s" podCreationTimestamp="2026-02-14 04:21:59 +0000 UTC" firstStartedPulling="2026-02-14 04:22:03.465056596 +0000 UTC m=+755.545993910" lastFinishedPulling="2026-02-14 04:22:07.030434035 +0000 UTC m=+759.111371349" observedRunningTime="2026-02-14 04:22:08.201707781 +0000 UTC m=+760.282645115" watchObservedRunningTime="2026-02-14 04:22:08.206619008 +0000 UTC m=+760.287556362" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.258195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.260356 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.262242 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267430 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-xq9r8" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267618 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267808 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.267817 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.272103 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.326893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327697 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.327806 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.429610 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.429913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.430212 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.431197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.431278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9201352-8585-47d4-9c13-b9e21ac4cd9f-config\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.445585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.448223 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: 
\"kubernetes.io/secret/c9201352-8585-47d4-9c13-b9e21ac4cd9f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.472884 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh797\" (UniqueName: \"kubernetes.io/projected/c9201352-8585-47d4-9c13-b9e21ac4cd9f-kube-api-access-bh797\") pod \"logging-loki-distributor-5d5548c9f5-7zdqp\" (UID: \"c9201352-8585-47d4-9c13-b9e21ac4cd9f\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.510220 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.511038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524001 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524337 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.524551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.543826 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.583559 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637178 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.637304 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.694150 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.694969 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.700934 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.701109 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.717480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740309 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740339 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740383 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740460 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnwwr\" (UniqueName: 
\"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740490 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740528 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740551 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.740569 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.741424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.747452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c48c070-b4b3-48af-b40a-d82788f764d9-config\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.752532 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.752826 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-http\") pod 
\"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.753473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/9c48c070-b4b3-48af-b40a-d82788f764d9-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.776681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb885\" (UniqueName: \"kubernetes.io/projected/9c48c070-b4b3-48af-b40a-d82788f764d9-kube-api-access-jb885\") pod \"logging-loki-querier-76bf7b6d45-5td7f\" (UID: \"9c48c070-b4b3-48af-b40a-d82788f764d9\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843000 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnwwr\" (UniqueName: \"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.843174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.844224 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-config\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: 
\"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.844864 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.862487 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.862579 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.871445 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.873612 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.873683 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.882351 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.888619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.888925 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889170 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889289 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889402 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-nrktg" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.889619 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.897177 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnwwr\" (UniqueName: \"kubernetes.io/projected/837b4fe4-f827-4882-8af7-225b18bb3e22-kube-api-access-fnwwr\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.907243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/837b4fe4-f827-4882-8af7-225b18bb3e22-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-cfcbp\" (UID: \"837b4fe4-f827-4882-8af7-225b18bb3e22\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.916726 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.942316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"] Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944915 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944958 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.944993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945042 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945064 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945228 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945300 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945464 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:13 crc kubenswrapper[4867]: I0214 04:22:13.945486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046524 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046648 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046676 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046717 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " 
pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046760 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046794 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046839 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.046907 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.048229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-rbac\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.048693 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.048791 4867 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.048844 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret podName:d28844dc-6974-446b-bd9a-b22586858387 nodeName:}" failed. No retries permitted until 2026-02-14 04:22:14.548826949 +0000 UTC m=+766.629764353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret") pod "logging-loki-gateway-767ffcbf75-md7ts" (UID: "d28844dc-6974-446b-bd9a-b22586858387") : secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.054593 4867 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: E0214 04:22:14.054647 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret podName:0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5 nodeName:}" failed. No retries permitted until 2026-02-14 04:22:14.554630859 +0000 UTC m=+766.635568173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret") pod "logging-loki-gateway-767ffcbf75-l82l4" (UID: "0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5") : secret "logging-loki-gateway-http" not found Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.054825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.054838 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.055546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.055767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-lokistack-gateway\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.056779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-rbac\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.058405 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tenants\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " 
pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.059247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.060418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.061840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tenants\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.062149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.070054 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx5r\" (UniqueName: \"kubernetes.io/projected/d28844dc-6974-446b-bd9a-b22586858387-kube-api-access-qpx5r\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.074867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28844dc-6974-446b-bd9a-b22586858387-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.081193 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.089563 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brz4x\" (UniqueName: \"kubernetes.io/projected/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-kube-api-access-brz4x\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.274427 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" event={"ID":"c9201352-8585-47d4-9c13-b9e21ac4cd9f","Type":"ContainerStarted","Data":"d6f02e514e7f08c4229f0d59d59d435798427e01cc5fc499d4c40561df5d700a"} Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.451277 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 
04:22:14.454133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.458794 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.459031 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.488371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-5td7f"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.501103 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.550611 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.551437 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.553406 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.553444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.554368 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.554494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.560891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d28844dc-6974-446b-bd9a-b22586858387-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-md7ts\" (UID: \"d28844dc-6974-446b-bd9a-b22586858387\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.563349 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.646280 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " 
pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655058 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655276 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655417 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.655539 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.656001 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.659631 4867 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.659679 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bf92a0cb196b1b992931cfb10952aecbe618752564bc39ecf0c6e130663d619b/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.660617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5-tls-secret\") pod \"logging-loki-gateway-767ffcbf75-l82l4\" (UID: \"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5\") " pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.689079 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c5d2e2aa-1056-4380-a637-cb59984f8098\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757405 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757460 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757530 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757559 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757586 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc 
kubenswrapper[4867]: I0214 04:22:14.757608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.757700 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758308 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758387 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.758635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.759187 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.759444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/775ca902-fd03-4191-9440-ea598768d4e6-config\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.760866 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.760903 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4319a35af2a769dfaedede67f86e0598a0eb8249043dc7339b30d4dc2ae902c5/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761689 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.761846 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/775ca902-fd03-4191-9440-ea598768d4e6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.779187 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zmn\" (UniqueName: \"kubernetes.io/projected/775ca902-fd03-4191-9440-ea598768d4e6-kube-api-access-l9zmn\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.779597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 
04:22:14.780473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.785452 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.785884 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.791840 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9c75345-8af5-49da-bd74-3fd013a2bafd\") pod \"logging-loki-ingester-0\" (UID: \"775ca902-fd03-4191-9440-ea598768d4e6\") " pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.796724 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.821642 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.827694 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.844542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860662 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.860791 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.861781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-config\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.862012 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.863157 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.863198 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/24180df4643c6a16eed522d6b0ea5a8e9075be778452dd4ea758fdd573b59001/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.864389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.867596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.868785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/6975f95f-884b-4952-8bf8-0d18537e3403-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.877147 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shzlw\" (UniqueName: \"kubernetes.io/projected/6975f95f-884b-4952-8bf8-0d18537e3403-kube-api-access-shzlw\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.895656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c95492d6-57e6-4336-afce-f3d2d1a9a88d\") pod \"logging-loki-compactor-0\" (UID: \"6975f95f-884b-4952-8bf8-0d18537e3403\") " pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962541 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962668 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:14 crc kubenswrapper[4867]: I0214 04:22:14.962998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064248 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: 
\"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064376 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064619 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.064650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.067879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.070057 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.070126 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.071251 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.071286 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6dd80fe1b9813ac525647e268fd40f85f3de84eda8cd138bd497820e0ff03be/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.072038 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.080294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.085546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcjl\" (UniqueName: \"kubernetes.io/projected/3c3333e0-ec4e-41bf-8296-9469ad3ac9cd-kube-api-access-mlcjl\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.100032 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2839edae-c7c1-4435-82fc-182943bb1f83\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2839edae-c7c1-4435-82fc-182943bb1f83\") pod \"logging-loki-index-gateway-0\" (UID: \"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.195060 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.291447 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-md7ts"] Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.294293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" event={"ID":"9c48c070-b4b3-48af-b40a-d82788f764d9","Type":"ContainerStarted","Data":"b5dbc7d5851ce0132216b247584b19bd35c9b7580e440b6d7a66ef4521fe7b43"} Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.296956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" event={"ID":"837b4fe4-f827-4882-8af7-225b18bb3e22","Type":"ContainerStarted","Data":"234ac99a30a2b802e31a96f1f42cb9fed6dc6de9f0592ffe849d8767f95f062b"} Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.302790 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28844dc_6974_446b_bd9a_b22586858387.slice/crio-907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b WatchSource:0}: Error finding container 907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b: Status 404 returned error can't find the container with id 907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.345583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.346995 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod775ca902_fd03_4191_9440_ea598768d4e6.slice/crio-98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2 WatchSource:0}: Error finding container 98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2: Status 404 returned error can't find the container with id 98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2 Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.397226 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.403048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-767ffcbf75-l82l4"] Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.405709 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c1f86e8_fb7b_40a7_9cc7_07bc9aa74ce5.slice/crio-907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee WatchSource:0}: Error finding container 907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee: Status 404 returned error can't find the container with id 907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.659923 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Feb 14 04:22:15 crc kubenswrapper[4867]: I0214 04:22:15.822879 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Feb 14 04:22:15 crc kubenswrapper[4867]: W0214 04:22:15.831372 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3333e0_ec4e_41bf_8296_9469ad3ac9cd.slice/crio-f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997 WatchSource:0}: Error finding container f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997: Status 404 returned error can't find the container with id f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997 Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.304273 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"6975f95f-884b-4952-8bf8-0d18537e3403","Type":"ContainerStarted","Data":"ccb2aaf0f62e18390459fa7694a18b237133ff0a92fd3eb37c5f2dc22a0a5e3b"} Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.305780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"907d6b304fac7f3f885ae186c6c57be5e30a63f0514f4475a8b2ab889c76398b"} Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.306969 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd","Type":"ContainerStarted","Data":"f113dcbafbace35f21d5c6191aa68d4a89b791daca06fc59b0739a1cac749997"} Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.308705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"775ca902-fd03-4191-9440-ea598768d4e6","Type":"ContainerStarted","Data":"98e2355e61cf5f15175d1f160c47ae329fa7da7e652a90d2841336fb51d86aa2"} Feb 14 04:22:16 crc kubenswrapper[4867]: I0214 04:22:16.309868 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"907aed794e07145bfef053123ca8d749decb53bed5bced57d31ce3fd0b0e57ee"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.345021 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" 
event={"ID":"775ca902-fd03-4191-9440-ea598768d4e6","Type":"ContainerStarted","Data":"169ea66b1988e22285b262d54bcbc4608cacdd0fb3c9b28f6847dfac5ebc59df"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.346669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.348843 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" event={"ID":"837b4fe4-f827-4882-8af7-225b18bb3e22","Type":"ContainerStarted","Data":"d70bb07fbdd4508db5891d729549052ca61be7cbad3897d210038ece393b3511"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.348964 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.351774 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"7ba2905855e993272c4b214c140509bda872171927a69642acd3d02ea21861bc"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.353900 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" event={"ID":"9c48c070-b4b3-48af-b40a-d82788f764d9","Type":"ContainerStarted","Data":"5abe207a494d942303c556158a2f9a268c5c53bfb9a2421323bba0befcf8d3ce"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.354785 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.356487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"6975f95f-884b-4952-8bf8-0d18537e3403","Type":"ContainerStarted","Data":"8b251bbe031bf6811830e0645ff930876c078e88e55861ac152f5fdd78eca244"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.356965 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.358484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" event={"ID":"c9201352-8585-47d4-9c13-b9e21ac4cd9f","Type":"ContainerStarted","Data":"895ac13ddc5c863fbcaa197af2ca920f7c947310cec88faba4298d72b0c48a52"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.358912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.360356 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"bbc0c912e4d0d5cba98c93f5d6b482101035594d917d9228e2d79b3bbaaa5652"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.362027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"3c3333e0-ec4e-41bf-8296-9469ad3ac9cd","Type":"ContainerStarted","Data":"5d686ab1322425ee880bef6b00c46db7139b5114d527e4b634095da9caacea96"} Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.362634 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.379455 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.914532866 podStartE2EDuration="7.379434284s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.349923721 +0000 UTC m=+767.430861035" lastFinishedPulling="2026-02-14 04:22:19.814825139 +0000 UTC m=+771.895762453" observedRunningTime="2026-02-14 04:22:20.370877782 +0000 UTC m=+772.451815106" watchObservedRunningTime="2026-02-14 04:22:20.379434284 +0000 UTC m=+772.460371608" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.389876 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podStartSLOduration=2.053994401 podStartE2EDuration="7.389856043s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.473409444 +0000 UTC m=+766.554346758" lastFinishedPulling="2026-02-14 04:22:19.809271066 +0000 UTC m=+771.890208400" observedRunningTime="2026-02-14 04:22:20.388761085 +0000 UTC m=+772.469698399" watchObservedRunningTime="2026-02-14 04:22:20.389856043 +0000 UTC m=+772.470793367" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.416555 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podStartSLOduration=1.778248984 podStartE2EDuration="7.416533503s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.102475796 +0000 UTC m=+766.183413110" lastFinishedPulling="2026-02-14 04:22:19.740760315 +0000 UTC m=+771.821697629" observedRunningTime="2026-02-14 04:22:20.412009726 +0000 UTC m=+772.492947070" watchObservedRunningTime="2026-02-14 04:22:20.416533503 +0000 UTC m=+772.497470817" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.452685 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podStartSLOduration=2.292465225 podStartE2EDuration="7.452663476s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:14.646983721 +0000 UTC m=+766.727921035" lastFinishedPulling="2026-02-14 04:22:19.807181972 +0000 UTC m=+771.888119286" observedRunningTime="2026-02-14 04:22:20.447221056 +0000 UTC m=+772.528158360" watchObservedRunningTime="2026-02-14 04:22:20.452663476 +0000 UTC m=+772.533600780" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.476193 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.344414766 podStartE2EDuration="7.476177064s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.684821347 +0000 UTC m=+767.765758661" lastFinishedPulling="2026-02-14 04:22:19.816583645 +0000 UTC m=+771.897520959" observedRunningTime="2026-02-14 04:22:20.474822899 +0000 UTC m=+772.555760213" watchObservedRunningTime="2026-02-14 04:22:20.476177064 +0000 UTC m=+772.557114368" Feb 14 04:22:20 crc kubenswrapper[4867]: I0214 04:22:20.508469 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.502843772 podStartE2EDuration="7.508446668s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" 
firstStartedPulling="2026-02-14 04:22:15.833675425 +0000 UTC m=+767.914612739" lastFinishedPulling="2026-02-14 04:22:19.839278321 +0000 UTC m=+771.920215635" observedRunningTime="2026-02-14 04:22:20.503846809 +0000 UTC m=+772.584784123" watchObservedRunningTime="2026-02-14 04:22:20.508446668 +0000 UTC m=+772.589383982" Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.378957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" event={"ID":"d28844dc-6974-446b-bd9a-b22586858387","Type":"ContainerStarted","Data":"6a006b19e56e3cf92b6649207f18201d86cdee688ceac33c20505054bb27deb4"} Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.380212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.380259 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.382873 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": dial tcp 10.217.0.54:8083: connect: connection refused" start-of-body= Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.382970 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": dial tcp 10.217.0.54:8083: connect: connection refused" Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.400311 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:22 crc kubenswrapper[4867]: I0214 04:22:22.406188 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podStartSLOduration=2.52474702 podStartE2EDuration="9.406167302s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.316484256 +0000 UTC m=+767.397421580" lastFinishedPulling="2026-02-14 04:22:22.197904548 +0000 UTC m=+774.278841862" observedRunningTime="2026-02-14 04:22:22.40185207 +0000 UTC m=+774.482789384" watchObservedRunningTime="2026-02-14 04:22:22.406167302 +0000 UTC m=+774.487104626" Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.404871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" event={"ID":"0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5","Type":"ContainerStarted","Data":"7408fe839eaef5d59649618802b010c8092e9a7dffe1ac25d667580b82d9b2e6"} Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.411477 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" Feb 14 04:22:23 crc kubenswrapper[4867]: I0214 04:22:23.435207 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podStartSLOduration=3.658736251 podStartE2EDuration="10.435181019s" podCreationTimestamp="2026-02-14 04:22:13 +0000 UTC" firstStartedPulling="2026-02-14 04:22:15.414218763 +0000 UTC m=+767.495156077" lastFinishedPulling="2026-02-14 
04:22:22.190663531 +0000 UTC m=+774.271600845" observedRunningTime="2026-02-14 04:22:23.423723403 +0000 UTC m=+775.504660767" watchObservedRunningTime="2026-02-14 04:22:23.435181019 +0000 UTC m=+775.516118333" Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.415860 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.416321 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.426332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:24 crc kubenswrapper[4867]: I0214 04:22:24.430142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" Feb 14 04:22:31 crc kubenswrapper[4867]: I0214 04:22:31.251501 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:22:31 crc kubenswrapper[4867]: I0214 04:22:31.252381 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:22:35 crc kubenswrapper[4867]: I0214 04:22:35.208224 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Feb 14 04:22:35 crc kubenswrapper[4867]: I0214 04:22:35.407137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Feb 14 04:22:43 crc kubenswrapper[4867]: I0214 04:22:43.590189 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" Feb 14 04:22:43 crc kubenswrapper[4867]: I0214 04:22:43.892042 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.077969 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.835347 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Feb 14 04:22:44 crc kubenswrapper[4867]: I0214 04:22:44.835483 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 04:22:54 crc kubenswrapper[4867]: I0214 04:22:54.827987 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness 
probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 04:22:54 crc kubenswrapper[4867]: I0214 04:22:54.828883 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.251573 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.252654 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.252738 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.253918 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.254000 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" gracePeriod=600 Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.747912 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" exitCode=0 Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.747987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a"} Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.748320 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} Feb 14 04:23:01 crc kubenswrapper[4867]: I0214 04:23:01.748347 4867 scope.go:117] "RemoveContainer" containerID="2de3d61c1f6c01b61b6559aa8687b810bcfdab61e971db1007a35ef4d563c645" Feb 14 04:23:04 crc kubenswrapper[4867]: I0214 04:23:04.829387 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Feb 14 04:23:04 crc kubenswrapper[4867]: I0214 04:23:04.830629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 04:23:14 crc kubenswrapper[4867]: I0214 04:23:14.830562 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.511442 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.513333 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.516848 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517148 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.517991 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zjsbd" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.518976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.526760 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.529123 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.659770 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.659976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660092 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: 
\"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660145 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660243 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.660488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.672094 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.672805 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-kbnd4 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-9wcmp" podUID="a2144ced-e8cb-4b28-82f2-65e8dbd4688f" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762024 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762102 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.762333 4867 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.762387 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: E0214 04:23:32.762415 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics 
podName:a2144ced-e8cb-4b28-82f2-65e8dbd4688f nodeName:}" failed. No retries permitted until 2026-02-14 04:23:33.262394946 +0000 UTC m=+845.343332270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics") pod "collector-9wcmp" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f") : secret "collector-metrics" not found Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763247 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763801 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.763943 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.764164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.764311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.768208 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.768433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.769698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.783789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:32 crc kubenswrapper[4867]: I0214 04:23:32.784730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.026959 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.036754 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167833 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167905 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.167986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168024 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168050 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168075 4867 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168151 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168181 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168211 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config" (OuterVolumeSpecName: "config") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.168925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir" (OuterVolumeSpecName: "datadir") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169397 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169439 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "config-openshift-service-cacrt". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.169979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170061 4867 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170748 4867 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-entrypoint\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170786 4867 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-datadir\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.170800 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.172881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.173078 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4" (OuterVolumeSpecName: "kube-api-access-kbnd4") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "kube-api-access-kbnd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.173342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token" (OuterVolumeSpecName: "sa-token") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.174377 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token" (OuterVolumeSpecName: "collector-token") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.178694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp" (OuterVolumeSpecName: "tmp") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272236 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272565 4867 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272584 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbnd4\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-kube-api-access-kbnd4\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272600 4867 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-sa-token\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272613 4867 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272624 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.272636 4867 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-tmp\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.276365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"collector-9wcmp\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " pod="openshift-logging/collector-9wcmp" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.373649 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") pod \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\" (UID: \"a2144ced-e8cb-4b28-82f2-65e8dbd4688f\") " Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.376302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics" (OuterVolumeSpecName: "metrics") pod "a2144ced-e8cb-4b28-82f2-65e8dbd4688f" (UID: "a2144ced-e8cb-4b28-82f2-65e8dbd4688f"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:23:33 crc kubenswrapper[4867]: I0214 04:23:33.475625 4867 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/a2144ced-e8cb-4b28-82f2-65e8dbd4688f-metrics\") on node \"crc\" DevicePath \"\"" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.036721 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-9wcmp" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.118217 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.124272 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-9wcmp"] Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.129156 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4tm7t"] Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.130080 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136252 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136318 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136259 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-zjsbd" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136660 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.136789 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.144556 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.146903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4tm7t"] Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289034 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289087 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289161 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289181 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289299 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.289325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390824 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390934 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390952 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.390986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.391013 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 
04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.391583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-datadir\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-entrypoint\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392399 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-trusted-ca\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.392716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-config-openshift-service-cacrt\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.395443 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.398779 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-tmp\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.398835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-metrics\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.403062 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-collector-syslog-receiver\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.407645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbgkj\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-kube-api-access-zbgkj\") pod \"collector-4tm7t\" (UID: 
\"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.407734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/0b309a8c-060a-4e8b-9731-3c4c3aab56f7-sa-token\") pod \"collector-4tm7t\" (UID: \"0b309a8c-060a-4e8b-9731-3c4c3aab56f7\") " pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.448573 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4tm7t" Feb 14 04:23:34 crc kubenswrapper[4867]: I0214 04:23:34.966531 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4tm7t"] Feb 14 04:23:34 crc kubenswrapper[4867]: W0214 04:23:34.978658 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b309a8c_060a_4e8b_9731_3c4c3aab56f7.slice/crio-94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0 WatchSource:0}: Error finding container 94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0: Status 404 returned error can't find the container with id 94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0 Feb 14 04:23:35 crc kubenswrapper[4867]: I0214 04:23:35.014646 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2144ced-e8cb-4b28-82f2-65e8dbd4688f" path="/var/lib/kubelet/pods/a2144ced-e8cb-4b28-82f2-65e8dbd4688f/volumes" Feb 14 04:23:35 crc kubenswrapper[4867]: I0214 04:23:35.045721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4tm7t" event={"ID":"0b309a8c-060a-4e8b-9731-3c4c3aab56f7","Type":"ContainerStarted","Data":"94df3171a232a76ec3943d5da2c7d86c0f99fb1eb5fdeed528545e2a39454ca0"} Feb 14 04:23:42 crc kubenswrapper[4867]: I0214 04:23:42.109441 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-4tm7t" event={"ID":"0b309a8c-060a-4e8b-9731-3c4c3aab56f7","Type":"ContainerStarted","Data":"cfc255139d34f5006f0cf92f0c59e4813687cfe1a16dab3d8448096c2259ec0c"} Feb 14 04:23:42 crc kubenswrapper[4867]: I0214 04:23:42.129375 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-4tm7t" podStartSLOduration=1.531214951 podStartE2EDuration="8.129354063s" podCreationTimestamp="2026-02-14 04:23:34 +0000 UTC" firstStartedPulling="2026-02-14 04:23:34.982085641 +0000 UTC m=+847.063022995" lastFinishedPulling="2026-02-14 04:23:41.580224793 +0000 UTC m=+853.661162107" observedRunningTime="2026-02-14 04:23:42.128954203 +0000 UTC m=+854.209891517" watchObservedRunningTime="2026-02-14 04:23:42.129354063 +0000 UTC m=+854.210291377" Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.965388 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"] Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.967717 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.969862 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:24:10 crc kubenswrapper[4867]: I0214 04:24:10.990708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"] Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.129954 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231759 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.231820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.232282 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.232322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.249692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.282444 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:11 crc kubenswrapper[4867]: I0214 04:24:11.830318 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb"] Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384109 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="4207d38a5fa1fca3e108eb003826a071655e3828ac35f556609321943c1c2c47" exitCode=0 Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384366 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"4207d38a5fa1fca3e108eb003826a071655e3828ac35f556609321943c1c2c47"} Feb 14 04:24:12 crc kubenswrapper[4867]: I0214 04:24:12.384392 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerStarted","Data":"988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294"} Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.184602 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.187794 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.199625 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.366688 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.367280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.367475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.469793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.470006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.493157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"redhat-operators-t9pkj\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.506987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:13 crc kubenswrapper[4867]: I0214 04:24:13.992480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400428 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" exitCode=0 Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1"} Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.400538 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"0d2d792529598a3e2dcb124798b2124c8ff8af3c6b135bf970fd763b2135679b"} Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.404072 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="a64861927453dca5f17bbb20043fda3e88d8a529d848ca7e5278cd400e0c0eb0" exitCode=0 Feb 14 04:24:14 crc kubenswrapper[4867]: I0214 04:24:14.404145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"a64861927453dca5f17bbb20043fda3e88d8a529d848ca7e5278cd400e0c0eb0"} Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.428829 4867 generic.go:334] "Generic (PLEG): container finished" podID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerID="c5fa47781c87791c7e8f1959a10cc57347dbb2a20e8a17b099544e91349440e7" exitCode=0 Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.429095 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"c5fa47781c87791c7e8f1959a10cc57347dbb2a20e8a17b099544e91349440e7"} Feb 14 04:24:15 crc kubenswrapper[4867]: I0214 04:24:15.432649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"} Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.117246 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231436 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231536 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.231615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") pod \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\" (UID: \"10159ab6-8862-4a8a-afd2-3fb5920f2cae\") " Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.232401 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle" (OuterVolumeSpecName: "bundle") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.238329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7" (OuterVolumeSpecName: "kube-api-access-c2sb7") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "kube-api-access-c2sb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.333377 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2sb7\" (UniqueName: \"kubernetes.io/projected/10159ab6-8862-4a8a-afd2-3fb5920f2cae-kube-api-access-c2sb7\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.334121 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.450291 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" exitCode=0 Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.450364 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"} Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" event={"ID":"10159ab6-8862-4a8a-afd2-3fb5920f2cae","Type":"ContainerDied","Data":"988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294"} Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452911 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988acb574b275ef9c7560746b8921c03fcb97d3fa46d8c5aa6fea99f5187d294" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.452938 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.456418 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util" (OuterVolumeSpecName: "util") pod "10159ab6-8862-4a8a-afd2-3fb5920f2cae" (UID: "10159ab6-8862-4a8a-afd2-3fb5920f2cae"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:17 crc kubenswrapper[4867]: I0214 04:24:17.537381 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10159ab6-8862-4a8a-afd2-3fb5920f2cae-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:18 crc kubenswrapper[4867]: I0214 04:24:18.462483 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerStarted","Data":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"} Feb 14 04:24:18 crc kubenswrapper[4867]: I0214 04:24:18.482301 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t9pkj" podStartSLOduration=2.006370509 podStartE2EDuration="5.482283186s" podCreationTimestamp="2026-02-14 04:24:13 +0000 UTC" firstStartedPulling="2026-02-14 04:24:14.401999923 +0000 UTC m=+886.482937237" lastFinishedPulling="2026-02-14 04:24:17.8779126 +0000 UTC m=+889.958849914" observedRunningTime="2026-02-14 04:24:18.482116811 +0000 UTC m=+890.563054125" watchObservedRunningTime="2026-02-14 04:24:18.482283186 +0000 UTC m=+890.563220500" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172059 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"] Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172622 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="util" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172634 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="util" Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172664 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="pull" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172669 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="pull" Feb 14 04:24:21 crc kubenswrapper[4867]: E0214 04:24:21.172677 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172683 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.172813 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="10159ab6-8862-4a8a-afd2-3fb5920f2cae" containerName="extract" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.173347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.177953 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b457g" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.178005 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.177954 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.195913 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"] Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.303761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: \"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.405832 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: \"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.427917 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwthd\" (UniqueName: \"kubernetes.io/projected/914b3f92-c030-4d1e-8454-96a7220f851e-kube-api-access-pwthd\") pod \"nmstate-operator-694c9596b7-tjfgz\" (UID: \"914b3f92-c030-4d1e-8454-96a7220f851e\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.490759 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" Feb 14 04:24:21 crc kubenswrapper[4867]: I0214 04:24:21.830183 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-tjfgz"] Feb 14 04:24:22 crc kubenswrapper[4867]: I0214 04:24:22.490937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" event={"ID":"914b3f92-c030-4d1e-8454-96a7220f851e","Type":"ContainerStarted","Data":"6bdb56fc6f29899e41d5a95bb762934117046b0d52eec86a1351d05e29a285ae"} Feb 14 04:24:23 crc kubenswrapper[4867]: I0214 04:24:23.507158 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:23 crc kubenswrapper[4867]: I0214 04:24:23.507597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:24 crc kubenswrapper[4867]: I0214 04:24:24.551155 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t9pkj" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" probeResult="failure" output=< Feb 14 04:24:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:24:24 crc kubenswrapper[4867]: > Feb 14 04:24:25 crc kubenswrapper[4867]: I0214 04:24:25.516204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" event={"ID":"914b3f92-c030-4d1e-8454-96a7220f851e","Type":"ContainerStarted","Data":"799d565c553e09c7f8e1cee56462c881ef66c749993e6574e9634e536fa08fc5"} Feb 14 04:24:25 crc kubenswrapper[4867]: I0214 04:24:25.546419 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-tjfgz" podStartSLOduration=1.668434808 podStartE2EDuration="4.546403079s" podCreationTimestamp="2026-02-14 04:24:21 +0000 UTC" firstStartedPulling="2026-02-14 04:24:21.835117248 +0000 UTC m=+893.916054562" lastFinishedPulling="2026-02-14 04:24:24.713085519 +0000 UTC m=+896.794022833" observedRunningTime="2026-02-14 04:24:25.545128306 +0000 UTC m=+897.626065630" watchObservedRunningTime="2026-02-14 04:24:25.546403079 +0000 UTC m=+897.627340393" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.397205 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.398935 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.405296 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-w9w2j" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.407700 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.408445 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.410053 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417021 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417067 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417128 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.417186 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.473005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.502239 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-k6p82"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.504156 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.518419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.518584 4867 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.518630 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair podName:fdb6e297-9da3-41ff-a6f3-de81833178c8 nodeName:}" failed. No retries permitted until 2026-02-14 04:24:33.018613355 +0000 UTC m=+905.099550669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair") pod "nmstate-webhook-866bcb46dc-khbvf" (UID: "fdb6e297-9da3-41ff-a6f3-de81833178c8") : secret "openshift-nmstate-webhook" not found Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.543538 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s62hl\" (UniqueName: \"kubernetes.io/projected/fdb6e297-9da3-41ff-a6f3-de81833178c8-kube-api-access-s62hl\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.544938 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pgbn\" (UniqueName: \"kubernetes.io/projected/c9fcfe59-df8c-4433-a47f-8b07f90d98bc-kube-api-access-7pgbn\") pod \"nmstate-metrics-58c85c668d-57gj6\" (UID: \"c9fcfe59-df8c-4433-a47f-8b07f90d98bc\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.613215 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.614387 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.616027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-h5hx7" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.617911 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.620618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.620885 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.621031 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.621080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.625765 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.626480 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.717149 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723070 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723201 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723331 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-nmstate-lock\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723648 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-ovs-socket\") pod 
\"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.723719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-dbus-socket\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.775978 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8pgp\" (UniqueName: \"kubernetes.io/projected/ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa-kube-api-access-f8pgp\") pod \"nmstate-handler-k6p82\" (UID: \"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa\") " pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.824804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.825583 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.825626 4867 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 14 04:24:32 crc kubenswrapper[4867]: E0214 04:24:32.825692 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert podName:bd1547ee-0518-45af-bb63-9001da6fa7de nodeName:}" failed. No retries permitted until 2026-02-14 04:24:33.32567724 +0000 UTC m=+905.406614554 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert") pod "nmstate-console-plugin-5c78fc5d65-xwq77" (UID: "bd1547ee-0518-45af-bb63-9001da6fa7de") : secret "plugin-serving-cert" not found Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.826302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/bd1547ee-0518-45af-bb63-9001da6fa7de-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.830484 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.831412 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.853862 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.863327 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vgjx\" (UniqueName: \"kubernetes.io/projected/bd1547ee-0518-45af-bb63-9001da6fa7de-kube-api-access-8vgjx\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927150 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927189 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927376 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:32 crc kubenswrapper[4867]: I0214 04:24:32.927426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036886 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036908 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.036962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.037062 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.037098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.043437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.044400 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.044991 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.045157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.045685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.048568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fdb6e297-9da3-41ff-a6f3-de81833178c8-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-khbvf\" (UID: \"fdb6e297-9da3-41ff-a6f3-de81833178c8\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.065319 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.068374 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.072275 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"console-6c8864b6b5-mwdd6\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.158369 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.353874 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.358859 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd1547ee-0518-45af-bb63-9001da6fa7de-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-xwq77\" (UID: \"bd1547ee-0518-45af-bb63-9001da6fa7de\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.434473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-57gj6"] Feb 14 04:24:33 crc kubenswrapper[4867]: W0214 04:24:33.448379 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9fcfe59_df8c_4433_a47f_8b07f90d98bc.slice/crio-a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029 WatchSource:0}: Error finding container a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029: Status 404 returned error can't find the container with id a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029 Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.535892 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.564558 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.613386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"a0e66ea9c79f8d6e023eb262660f5323bc21fdd4846407aaf96dc2cdaf5e6029"} Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.615923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6p82" event={"ID":"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa","Type":"ContainerStarted","Data":"b91c206570513148d6d5eff0600ac77cf7f699da03c866663af40025b8c9f3b6"} Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.636831 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.642431 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf"] Feb 14 04:24:33 crc kubenswrapper[4867]: W0214 04:24:33.647004 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdb6e297_9da3_41ff_a6f3_de81833178c8.slice/crio-689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a WatchSource:0}: Error finding container 689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a: Status 404 returned error can't find the container with id 689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.774678 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:24:33 crc kubenswrapper[4867]: I0214 04:24:33.805496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.084635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77"] Feb 14 04:24:34 crc kubenswrapper[4867]: W0214 04:24:34.095002 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd1547ee_0518_45af_bb63_9001da6fa7de.slice/crio-fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780 WatchSource:0}: Error finding container fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780: Status 404 returned error can't find the container with id fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780 Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.625136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" event={"ID":"fdb6e297-9da3-41ff-a6f3-de81833178c8","Type":"ContainerStarted","Data":"689889f9b3e174df210a7d68d031b1261c6773a4fe020c63face34083ea6736a"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.626927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" event={"ID":"bd1547ee-0518-45af-bb63-9001da6fa7de","Type":"ContainerStarted","Data":"fde8b37addf77f8549b56f0db91acffea96ae7b27f2a47862316f331aa921780"} Feb 14 
04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629092 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t9pkj" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" containerID="cri-o://49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" gracePeriod=2 Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerStarted","Data":"c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.629706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerStarted","Data":"999f569ca24af828fccac613f37abfd55e6b13b288390e3bcddcc9896a94a3f7"} Feb 14 04:24:34 crc kubenswrapper[4867]: I0214 04:24:34.670349 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6c8864b6b5-mwdd6" podStartSLOduration=2.670321785 podStartE2EDuration="2.670321785s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:24:34.658472437 +0000 UTC m=+906.739409791" watchObservedRunningTime="2026-02-14 04:24:34.670321785 +0000 UTC m=+906.751259099" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.182793 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297070 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297140 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.297305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") pod \"5ad1164b-e852-484b-b290-6d32e24d3d8e\" (UID: \"5ad1164b-e852-484b-b290-6d32e24d3d8e\") " Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.298477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities" (OuterVolumeSpecName: "utilities") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.322293 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g" (OuterVolumeSpecName: "kube-api-access-p2z7g") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "kube-api-access-p2z7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.399855 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2z7g\" (UniqueName: \"kubernetes.io/projected/5ad1164b-e852-484b-b290-6d32e24d3d8e-kube-api-access-p2z7g\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.400108 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.432704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ad1164b-e852-484b-b290-6d32e24d3d8e" (UID: "5ad1164b-e852-484b-b290-6d32e24d3d8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.501705 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ad1164b-e852-484b-b290-6d32e24d3d8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641166 4867 generic.go:334] "Generic (PLEG): container finished" podID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" exitCode=0 Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641269 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t9pkj" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"} Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641412 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t9pkj" event={"ID":"5ad1164b-e852-484b-b290-6d32e24d3d8e","Type":"ContainerDied","Data":"0d2d792529598a3e2dcb124798b2124c8ff8af3c6b135bf970fd763b2135679b"} Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.641437 4867 scope.go:117] "RemoveContainer" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.673004 4867 scope.go:117] "RemoveContainer" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.674128 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.680078 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t9pkj"] Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.729209 4867 scope.go:117] "RemoveContainer" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.781733 4867 scope.go:117] "RemoveContainer" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.782287 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": container with ID starting with 49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347 not found: ID does not exist" containerID="49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.782339 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347"} err="failed to get container status \"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": rpc error: code = NotFound desc = could not find container \"49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347\": container with ID starting with 49c660544513666115d86e4b4e0e7ddb150debdf0be5426823aa42b267bda347 not found: ID does not exist" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.782371 4867 scope.go:117] "RemoveContainer" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.783954 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": container with ID starting with d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf not found: ID does not exist" containerID="d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.783974 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf"} err="failed to get container status \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": rpc error: code = NotFound desc = could not find container \"d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf\": container with ID starting with d771e48ac93f52501b02ff902419db985dbdc75b66d985499f576fcba9d2c8cf not found: ID does not exist" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.783989 4867 scope.go:117] "RemoveContainer" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: E0214 04:24:35.784517 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": container with ID starting with 59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1 not found: ID does not exist" containerID="59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1" Feb 14 04:24:35 crc kubenswrapper[4867]: I0214 04:24:35.784550 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1"} err="failed to get container status \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": rpc error: code = NotFound desc = could not find container \"59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1\": container with ID starting with 59213eb5738330920a318fc901360e949cd08966f19e4aeca5c0862bcdd388f1 not found: ID does not exist" Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.007628 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" path="/var/lib/kubelet/pods/5ad1164b-e852-484b-b290-6d32e24d3d8e/volumes" Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.662684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"4e20e0b744f80694499d413e780a6fe175467627a13bcf53143fa0e3950eb199"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.664866 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" event={"ID":"bd1547ee-0518-45af-bb63-9001da6fa7de","Type":"ContainerStarted","Data":"d53e858246870ebc705630eeddac5777f35bf1ff9c3e7e2104365186b6739e00"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.667339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" event={"ID":"fdb6e297-9da3-41ff-a6f3-de81833178c8","Type":"ContainerStarted","Data":"5e85c689adbcce35a22683e54c5bb7f86cec1fbb103cf18d989ea1230fc5d615"} Feb 14 04:24:37 crc kubenswrapper[4867]: I0214 04:24:37.682140 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-xwq77" podStartSLOduration=2.463098864 podStartE2EDuration="5.68211797s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:34.097564671 +0000 UTC m=+906.178501975" lastFinishedPulling="2026-02-14 04:24:37.316583767 +0000 UTC m=+909.397521081" observedRunningTime="2026-02-14 04:24:37.677907341 +0000 UTC 
m=+909.758844655" watchObservedRunningTime="2026-02-14 04:24:37.68211797 +0000 UTC m=+909.763055284" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.675235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-k6p82" event={"ID":"ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa","Type":"ContainerStarted","Data":"b008e3ff644420661244317668e9c1ae0286046bea6a7ee3f1a5406bea640614"} Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.675668 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.696917 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-k6p82" podStartSLOduration=2.292169434 podStartE2EDuration="6.696890074s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:32.941673212 +0000 UTC m=+905.022610526" lastFinishedPulling="2026-02-14 04:24:37.346393852 +0000 UTC m=+909.427331166" observedRunningTime="2026-02-14 04:24:38.687208683 +0000 UTC m=+910.768145997" watchObservedRunningTime="2026-02-14 04:24:38.696890074 +0000 UTC m=+910.777827418" Feb 14 04:24:38 crc kubenswrapper[4867]: I0214 04:24:38.714007 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podStartSLOduration=3.040199081 podStartE2EDuration="6.713983138s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:33.649828853 +0000 UTC m=+905.730766167" lastFinishedPulling="2026-02-14 04:24:37.32361291 +0000 UTC m=+909.404550224" observedRunningTime="2026-02-14 04:24:38.701658838 +0000 UTC m=+910.782596162" watchObservedRunningTime="2026-02-14 04:24:38.713983138 +0000 UTC m=+910.794920492" Feb 14 04:24:39 crc kubenswrapper[4867]: I0214 04:24:39.682547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:40 crc kubenswrapper[4867]: I0214 04:24:40.690536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" event={"ID":"c9fcfe59-df8c-4433-a47f-8b07f90d98bc","Type":"ContainerStarted","Data":"94f36c17e98ae4ab51239fa7c7e1510c22698ff66613e759b06ddc01e8aca414"} Feb 14 04:24:40 crc kubenswrapper[4867]: I0214 04:24:40.712298 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-57gj6" podStartSLOduration=1.8842576009999998 podStartE2EDuration="8.712273123s" podCreationTimestamp="2026-02-14 04:24:32 +0000 UTC" firstStartedPulling="2026-02-14 04:24:33.45257067 +0000 UTC m=+905.533507984" lastFinishedPulling="2026-02-14 04:24:40.280586192 +0000 UTC m=+912.361523506" observedRunningTime="2026-02-14 04:24:40.705813135 +0000 UTC m=+912.786750449" watchObservedRunningTime="2026-02-14 04:24:40.712273123 +0000 UTC m=+912.793210437" Feb 14 04:24:42 crc kubenswrapper[4867]: I0214 04:24:42.850650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-k6p82" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.159954 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.159991 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.165624 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.721266 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:24:43 crc kubenswrapper[4867]: I0214 04:24:43.794284 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:24:53 crc kubenswrapper[4867]: I0214 04:24:53.078197 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.250972 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.251737 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.734633 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735026 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-utilities" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735048 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-utilities" Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735078 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735088 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: E0214 04:25:01.735104 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-content" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735112 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="extract-content" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.735284 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad1164b-e852-484b-b290-6d32e24d3d8e" containerName="registry-server" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.736678 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.756574 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.806962 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.907895 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.907991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.908032 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.909019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.909043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:01 crc kubenswrapper[4867]: I0214 04:25:01.931421 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"redhat-marketplace-7m8sv\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.054763 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.621943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.892119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} Feb 14 04:25:02 crc kubenswrapper[4867]: I0214 04:25:02.892722 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"0f8af18980c35ed58409e3eef5c5ce346989fd381ffc5f93082d6eedce320de7"} Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.908589 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" exitCode=0 Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.909086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} Feb 14 04:25:03 crc kubenswrapper[4867]: I0214 04:25:03.912402 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:25:05 crc kubenswrapper[4867]: I0214 04:25:05.934051 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" exitCode=0 Feb 14 04:25:05 crc kubenswrapper[4867]: I0214 04:25:05.934177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c"} Feb 14 04:25:06 crc kubenswrapper[4867]: I0214 04:25:06.959093 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerStarted","Data":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} Feb 14 04:25:06 crc kubenswrapper[4867]: I0214 04:25:06.988111 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7m8sv" podStartSLOduration=3.484357437 podStartE2EDuration="5.988093448s" podCreationTimestamp="2026-02-14 04:25:01 +0000 UTC" firstStartedPulling="2026-02-14 04:25:03.912166117 +0000 UTC m=+935.993103431" lastFinishedPulling="2026-02-14 04:25:06.415902128 +0000 UTC m=+938.496839442" observedRunningTime="2026-02-14 04:25:06.984134776 +0000 UTC m=+939.065072090" watchObservedRunningTime="2026-02-14 04:25:06.988093448 +0000 UTC 
m=+939.069030762" Feb 14 04:25:08 crc kubenswrapper[4867]: I0214 04:25:08.867427 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6687988ff8-hggh9" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" containerID="cri-o://3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" gracePeriod=15 Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.388331 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6687988ff8-hggh9_2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/console/0.log" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.389804 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463820 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463946 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.463995 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464117 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.464180 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") pod \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\" (UID: \"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6\") " Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca" (OuterVolumeSpecName: 
"service-ca") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.465757 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.467671 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config" (OuterVolumeSpecName: "console-config") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.475674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.476875 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.477149 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd" (OuterVolumeSpecName: "kube-api-access-pm2vd") pod "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" (UID: "2d9ba4d6-e777-4a10-96d1-30a492f9ecf6"). InnerVolumeSpecName "kube-api-access-pm2vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567623 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567673 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567687 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567701 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567717 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm2vd\" (UniqueName: \"kubernetes.io/projected/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-kube-api-access-pm2vd\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567736 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.567780 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.991829 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6687988ff8-hggh9_2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/console/0.log" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992177 4867 generic.go:334] "Generic (PLEG): container finished" podID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" exitCode=2 Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerDied","Data":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6687988ff8-hggh9" event={"ID":"2d9ba4d6-e777-4a10-96d1-30a492f9ecf6","Type":"ContainerDied","Data":"129cdcd69132d20dcbb1f824da4d34637e927a59f414ddd5999cdc93d09a0538"} Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992257 4867 scope.go:117] "RemoveContainer" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:09 crc kubenswrapper[4867]: I0214 04:25:09.992291 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6687988ff8-hggh9" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.032947 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.036787 4867 scope.go:117] "RemoveContainer" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:10 crc kubenswrapper[4867]: E0214 04:25:10.037690 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": container with ID starting with 3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063 not found: ID does not exist" containerID="3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.037740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063"} err="failed to get container status \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": rpc error: code = NotFound desc = could not find container \"3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063\": container with ID starting with 3645eb9bc387f910ed152e2f9ff7796cc316b5c34a3967a69643a1f6d547d063 not found: ID does not exist" Feb 14 04:25:10 crc kubenswrapper[4867]: I0214 04:25:10.039775 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6687988ff8-hggh9"] Feb 14 04:25:11 crc kubenswrapper[4867]: I0214 04:25:11.008376 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" path="/var/lib/kubelet/pods/2d9ba4d6-e777-4a10-96d1-30a492f9ecf6/volumes" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.055017 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.055110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:12 crc kubenswrapper[4867]: I0214 04:25:12.108415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:13 crc kubenswrapper[4867]: I0214 04:25:13.071973 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.444710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703071 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:14 crc kubenswrapper[4867]: E0214 04:25:14.703459 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703477 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.703689 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2d9ba4d6-e777-4a10-96d1-30a492f9ecf6" containerName="console" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.705032 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.715653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.720207 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.869707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972101 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.972983 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc 
kubenswrapper[4867]: I0214 04:25:14.972822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:14 crc kubenswrapper[4867]: I0214 04:25:14.973612 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.003669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.029568 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7m8sv" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" containerID="cri-o://800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" gracePeriod=2 Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.040492 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.524129 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.596618 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn"] Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687435 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687521 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.687701 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") pod \"d98e15fa-a08a-4710-a903-60a1af5ff85c\" (UID: \"d98e15fa-a08a-4710-a903-60a1af5ff85c\") " Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.688826 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities" (OuterVolumeSpecName: "utilities") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.696710 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4" (OuterVolumeSpecName: "kube-api-access-fgcq4") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "kube-api-access-fgcq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.718495 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d98e15fa-a08a-4710-a903-60a1af5ff85c" (UID: "d98e15fa-a08a-4710-a903-60a1af5ff85c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790172 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790217 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgcq4\" (UniqueName: \"kubernetes.io/projected/d98e15fa-a08a-4710-a903-60a1af5ff85c-kube-api-access-fgcq4\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:15 crc kubenswrapper[4867]: I0214 04:25:15.790233 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d98e15fa-a08a-4710-a903-60a1af5ff85c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040012 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerID="025cfdbf2a758606cb832c39de19cbd957cd6a91a34d8ad3c65d524e3f69a579" exitCode=0 Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"025cfdbf2a758606cb832c39de19cbd957cd6a91a34d8ad3c65d524e3f69a579"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.040158 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerStarted","Data":"2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043329 4867 generic.go:334] "Generic (PLEG): container finished" podID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" exitCode=0 Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7m8sv" event={"ID":"d98e15fa-a08a-4710-a903-60a1af5ff85c","Type":"ContainerDied","Data":"0f8af18980c35ed58409e3eef5c5ce346989fd381ffc5f93082d6eedce320de7"} Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043430 4867 scope.go:117] "RemoveContainer" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.043605 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7m8sv" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.063536 4867 scope.go:117] "RemoveContainer" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.082827 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.090798 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7m8sv"] Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.099732 4867 scope.go:117] "RemoveContainer" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135073 4867 scope.go:117] "RemoveContainer" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.135571 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": container with ID starting with 800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35 not found: ID does not exist" containerID="800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135621 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35"} err="failed to get container status \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": rpc error: code = NotFound desc = could not find container \"800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35\": container with ID starting with 800ec12ae40651afc5994d9e62ff224d7af9d3df94bec204c2ad4dc1516bde35 not found: ID does not exist" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.135650 4867 scope.go:117] "RemoveContainer" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.135987 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": container with ID starting with 781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c not found: ID does not exist" containerID="781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136020 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c"} err="failed to get container status \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": rpc error: code = NotFound desc = could not find container \"781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c\": container with ID starting with 781f5d11a5a6f97d66dc2c2ec0eae435679b8dd4779f05f225fb3ce5dd559a2c not found: ID does not exist" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136047 4867 scope.go:117] "RemoveContainer" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: E0214 04:25:16.136375 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": container with ID starting with be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01 not found: ID does not exist" containerID="be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01" Feb 14 04:25:16 crc kubenswrapper[4867]: I0214 04:25:16.136404 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01"} err="failed to get container status \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": rpc error: code = NotFound desc = could not find container \"be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01\": container with ID starting with be841d62f3009374faac139bf7c9000724217c0d043cee6d7f13deb00ae9eb01 not found: ID does not exist" Feb 14 04:25:17 crc kubenswrapper[4867]: I0214 04:25:17.005915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" path="/var/lib/kubelet/pods/d98e15fa-a08a-4710-a903-60a1af5ff85c/volumes" Feb 14 04:25:18 crc kubenswrapper[4867]: I0214 04:25:18.061952 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerID="bd4ca1932fd255aa202749888d70a75889f2b31069893060643a2caae1e51f9a" exitCode=0 Feb 14 04:25:18 crc kubenswrapper[4867]: I0214 04:25:18.062089 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"bd4ca1932fd255aa202749888d70a75889f2b31069893060643a2caae1e51f9a"} Feb 14 04:25:19 crc kubenswrapper[4867]: I0214 04:25:19.072569 4867 generic.go:334] "Generic (PLEG): container finished" podID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerID="4256fd9fddc4b76fc03a089854dcfa3f61c0df98de19f12dfa8e554deb082fdc" exitCode=0 Feb 14 04:25:19 crc kubenswrapper[4867]: I0214 04:25:19.072625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"4256fd9fddc4b76fc03a089854dcfa3f61c0df98de19f12dfa8e554deb082fdc"} Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.382843 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.482741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") pod \"cc14a3a2-05fa-4675-bace-02675c564e5f\" (UID: \"cc14a3a2-05fa-4675-bace-02675c564e5f\") " Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.484393 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle" (OuterVolumeSpecName: "bundle") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.489148 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h" (OuterVolumeSpecName: "kube-api-access-47s2h") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "kube-api-access-47s2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.511097 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util" (OuterVolumeSpecName: "util") pod "cc14a3a2-05fa-4675-bace-02675c564e5f" (UID: "cc14a3a2-05fa-4675-bace-02675c564e5f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583635 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583689 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47s2h\" (UniqueName: \"kubernetes.io/projected/cc14a3a2-05fa-4675-bace-02675c564e5f-kube-api-access-47s2h\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:20 crc kubenswrapper[4867]: I0214 04:25:20.583702 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cc14a3a2-05fa-4675-bace-02675c564e5f-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" event={"ID":"cc14a3a2-05fa-4675-bace-02675c564e5f","Type":"ContainerDied","Data":"2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf"} Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100859 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aaee53e90cd8a02d4834edc174933e454497e41bdbb5e7b0688f330535eb7cf" Feb 14 04:25:21 crc kubenswrapper[4867]: I0214 04:25:21.100882 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053221 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053823 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053839 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053845 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053865 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="util" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053870 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="util" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053881 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-utilities" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-utilities" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053896 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" 
containerName="pull" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053902 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="pull" Feb 14 04:25:23 crc kubenswrapper[4867]: E0214 04:25:23.053914 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-content" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.053919 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="extract-content" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.054044 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc14a3a2-05fa-4675-bace-02675c564e5f" containerName="extract" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.054054 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d98e15fa-a08a-4710-a903-60a1af5ff85c" containerName="registry-server" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.055214 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.069244 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.225643 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.225959 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.226074 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.327944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.328849 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.328882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.361665 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"certified-operators-89zzb\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.377356 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:23 crc kubenswrapper[4867]: I0214 04:25:23.907184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:24 crc kubenswrapper[4867]: I0214 04:25:24.120659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6"} Feb 14 04:25:24 crc kubenswrapper[4867]: I0214 04:25:24.120949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799"} Feb 14 04:25:25 crc kubenswrapper[4867]: I0214 04:25:25.131148 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerID="5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6" exitCode=0 Feb 14 04:25:25 crc kubenswrapper[4867]: I0214 04:25:25.131212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6"} Feb 14 04:25:26 crc kubenswrapper[4867]: I0214 04:25:26.140100 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7"} Feb 14 04:25:26 crc kubenswrapper[4867]: E0214 04:25:26.649638 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-conmon-f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:25:27 crc kubenswrapper[4867]: I0214 04:25:27.148660 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerID="f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7" exitCode=0 Feb 14 04:25:27 crc kubenswrapper[4867]: I0214 04:25:27.148760 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7"} Feb 14 04:25:28 crc kubenswrapper[4867]: I0214 04:25:28.160030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerStarted","Data":"84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af"} Feb 14 04:25:28 crc kubenswrapper[4867]: I0214 04:25:28.183983 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-89zzb" podStartSLOduration=2.787306109 podStartE2EDuration="5.1839624s" podCreationTimestamp="2026-02-14 04:25:23 +0000 UTC" firstStartedPulling="2026-02-14 04:25:25.133703945 +0000 UTC m=+957.214641259" lastFinishedPulling="2026-02-14 04:25:27.530360236 +0000 UTC m=+959.611297550" observedRunningTime="2026-02-14 04:25:28.18049824 +0000 UTC m=+960.261435584" watchObservedRunningTime="2026-02-14 04:25:28.1839624 +0000 UTC m=+960.264899714" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.413148 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.415440 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.427239 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.427800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428135 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.428448 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ssl6n" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.448308 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.559772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.662868 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.671620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-apiservice-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.673171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-webhook-cert\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.690535 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6pxc\" (UniqueName: \"kubernetes.io/projected/e1d5f0bd-4e8c-45c7-9d4e-c530689948ad-kube-api-access-q6pxc\") pod \"metallb-operator-controller-manager-67594686f4-52kwb\" (UID: \"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad\") " pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.749837 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.970153 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.971391 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977367 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977699 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bk6fr" Feb 14 04:25:30 crc kubenswrapper[4867]: I0214 04:25:30.977956 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.035256 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.072485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173933 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.173995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 
04:25:31.192045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-apiservice-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.192315 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d5e9c930-96ca-4a35-af4f-b8ae033469a5-webhook-cert\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.197058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4ltd\" (UniqueName: \"kubernetes.io/projected/d5e9c930-96ca-4a35-af4f-b8ae033469a5-kube-api-access-t4ltd\") pod \"metallb-operator-webhook-server-7f9bfb45cb-mpxbn\" (UID: \"d5e9c930-96ca-4a35-af4f-b8ae033469a5\") " pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.254700 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.254787 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.335587 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.519633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"] Feb 14 04:25:31 crc kubenswrapper[4867]: I0214 04:25:31.869250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn"] Feb 14 04:25:32 crc kubenswrapper[4867]: I0214 04:25:32.203479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"7718f8a85877233a199a0d78e4a43cd0f8c75fac444005e1e147a286cedb7377"} Feb 14 04:25:32 crc kubenswrapper[4867]: I0214 04:25:32.205368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"5be51d0e0c6b771905fdca56951d824129bce28d1ecefdf5c2b307a204fea993"} Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.378263 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.384671 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:33 crc kubenswrapper[4867]: I0214 04:25:33.457212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:34 crc kubenswrapper[4867]: I0214 04:25:34.313847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.284870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027"} Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.307345 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podStartSLOduration=2.279931092 podStartE2EDuration="6.307328192s" podCreationTimestamp="2026-02-14 04:25:30 +0000 UTC" firstStartedPulling="2026-02-14 04:25:31.534052791 +0000 UTC m=+963.614990105" lastFinishedPulling="2026-02-14 04:25:35.561449891 +0000 UTC m=+967.642387205" observedRunningTime="2026-02-14 04:25:36.303942434 +0000 UTC m=+968.384879748" watchObservedRunningTime="2026-02-14 04:25:36.307328192 +0000 UTC m=+968.388265506" Feb 14 04:25:36 crc kubenswrapper[4867]: I0214 04:25:36.648941 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:37 crc kubenswrapper[4867]: I0214 04:25:37.291665 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:25:37 crc kubenswrapper[4867]: I0214 04:25:37.291849 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-89zzb" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" 
containerName="registry-server" containerID="cri-o://84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" gracePeriod=2 Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.312164 4867 generic.go:334] "Generic (PLEG): container finished" podID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerID="84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" exitCode=0 Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.312246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af"} Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.580870 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.730808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.730944 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.731030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") pod \"41593fcf-d77d-43cb-897b-bf50bbc07d31\" (UID: \"41593fcf-d77d-43cb-897b-bf50bbc07d31\") " Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.732056 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities" (OuterVolumeSpecName: "utilities") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: "41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.740438 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd" (OuterVolumeSpecName: "kube-api-access-4fhtd") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: "41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "kube-api-access-4fhtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.783038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41593fcf-d77d-43cb-897b-bf50bbc07d31" (UID: "41593fcf-d77d-43cb-897b-bf50bbc07d31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.832973 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fhtd\" (UniqueName: \"kubernetes.io/projected/41593fcf-d77d-43cb-897b-bf50bbc07d31-kube-api-access-4fhtd\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.833019 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:38 crc kubenswrapper[4867]: I0214 04:25:38.833031 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41593fcf-d77d-43cb-897b-bf50bbc07d31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:25:39 crc kubenswrapper[4867]: E0214 04:25:39.104617 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41593fcf_d77d_43cb_897b_bf50bbc07d31.slice/crio-b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799\": RecentStats: unable to find data in memory cache]" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321276 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-89zzb" event={"ID":"41593fcf-d77d-43cb-897b-bf50bbc07d31","Type":"ContainerDied","Data":"b1c5650acb46edc5c20087f88f5e194cf319b08e3b50efe1315f88ecbf3e0799"} Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321336 4867 scope.go:117] "RemoveContainer" containerID="84b14d36d17a6852928a4165379f01ef8bd89cd3b51c2f9a1fa85599bcd5a4af" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.321331 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-89zzb" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.323219 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"} Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.323453 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.340049 4867 scope.go:117] "RemoveContainer" containerID="f4d29f8ea9676c2101890d6b580cc624a0fb609f17c4b40302ee52454cdc91b7" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.341732 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.359682 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-89zzb"] Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.361075 4867 scope.go:117] "RemoveContainer" containerID="5c0f380549657313e0565dc481c122d115c86229dca3f0afe73563f2bb24adf6" Feb 14 04:25:39 crc kubenswrapper[4867]: I0214 04:25:39.375712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podStartSLOduration=2.983757409 podStartE2EDuration="9.375689846s" podCreationTimestamp="2026-02-14 04:25:30 +0000 UTC" firstStartedPulling="2026-02-14 04:25:31.880786485 +0000 UTC m=+963.961723799" lastFinishedPulling="2026-02-14 04:25:38.272718922 +0000 UTC m=+970.353656236" observedRunningTime="2026-02-14 04:25:39.369718751 +0000 UTC m=+971.450656065" watchObservedRunningTime="2026-02-14 04:25:39.375689846 +0000 UTC m=+971.456627160" Feb 14 04:25:41 crc kubenswrapper[4867]: I0214 04:25:41.006805 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" path="/var/lib/kubelet/pods/41593fcf-d77d-43cb-897b-bf50bbc07d31/volumes" Feb 14 04:25:51 crc kubenswrapper[4867]: I0214 04:25:51.343881 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.250427 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251019 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251062 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251749 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.251798 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" gracePeriod=600 Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.485779 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" exitCode=0 Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.485826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5"} Feb 14 04:26:01 crc kubenswrapper[4867]: I0214 04:26:01.486245 4867 scope.go:117] "RemoveContainer" containerID="51f114f48cb9a2cff6d859aa7aea42ea438df249b54ac2cc89b9fb1c0a39a59a" Feb 14 04:26:02 crc kubenswrapper[4867]: I0214 04:26:02.533678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} Feb 14 04:26:10 crc kubenswrapper[4867]: I0214 04:26:10.753491 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481349 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-nzdwg"] Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481737 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481758 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481793 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-utilities" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481802 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-utilities" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.481816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-content" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.481826 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="extract-content" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.482004 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="41593fcf-d77d-43cb-897b-bf50bbc07d31" containerName="registry-server" Feb 14 04:26:11 crc 
kubenswrapper[4867]: I0214 04:26:11.485445 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.489906 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-gpnt5" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.490219 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.490401 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.499089 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.500345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.501836 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.541561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585703 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585813 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fmk4\" (UniqueName: 
\"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.585936 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.605716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4hvw7"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.619987 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.623996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-tv9sc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624161 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624319 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624450 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.624879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.626432 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.630646 4867 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.645024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"] Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.686976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687068 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687095 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687197 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: 
I0214 04:26:11.687234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fmk4\" (UniqueName: \"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687343 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687375 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.687823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-reloader\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.688016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics\") pod \"frr-k8s-nzdwg\" 
(UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.688186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-sockets\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.691871 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-conf\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.693238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cfde5532-97c7-47b8-8b63-0159fc9e82b9-frr-startup\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.693417 4867 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.693499 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs podName:cfde5532-97c7-47b8-8b63-0159fc9e82b9 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.19347867 +0000 UTC m=+1004.274416054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs") pod "frr-k8s-nzdwg" (UID: "cfde5532-97c7-47b8-8b63-0159fc9e82b9") : secret "frr-k8s-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.693970 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/85e0628d-4132-4c09-9da0-35db43024c9c-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.714722 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fmk4\" (UniqueName: \"kubernetes.io/projected/cfde5532-97c7-47b8-8b63-0159fc9e82b9-kube-api-access-2fmk4\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.717805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4b4d\" (UniqueName: \"kubernetes.io/projected/85e0628d-4132-4c09-9da0-35db43024c9c-kube-api-access-x4b4d\") pod \"frr-k8s-webhook-server-78b44bf5bb-9gqfb\" (UID: \"85e0628d-4132-4c09-9da0-35db43024c9c\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.788946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789330 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789143 4867 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789469 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.289445951 +0000 UTC m=+1004.370383265 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "speaker-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789742 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789796 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789878 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.789905 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.289881553 +0000 UTC m=+1004.370818967 (durationBeforeRetry 500ms). 
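
The durationBeforeRetry values in these nestedpendingoperations errors follow the kubelet's per-volume exponential backoff: the first failed MountVolume.SetUp for a volume is retried after 500ms, and each further failure doubles the wait, which is why the memberlist retry a few entries below is pushed out to a full 1s. A minimal Go sketch of that schedule — the 500ms initial delay and the doubling are taken from these log lines; the cap is an assumption based on the upstream exponentialbackoff package and may differ across kubelet releases:

```go
package main

import (
	"fmt"
	"time"
)

// durationBeforeRetry mirrors the doubling retry delay the kubelet applies
// to a volume operation that keeps failing: 500ms after the first failure,
// 1s after the second, and so on, up to a cap.
func durationBeforeRetry(failures int) time.Duration {
	const initial = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second // assumed cap; not visible in this log
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 4; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
	}
	// failure 1 -> retry in 500ms, failure 2 -> retry in 1s: the same
	// 500ms and 1s windows recorded for the metrics-certs and memberlist
	// mounts in the surrounding entries.
}
```
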
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "metallb-memberlist" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.789942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.790052 4867 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: E0214 04:26:11.790101 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs podName:516cf204-1263-431e-a450-039739b0d925 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:12.290090288 +0000 UTC m=+1004.371027612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs") pod "controller-69bbfbf88f-zhmxc" (UID: "516cf204-1263-431e-a450-039739b0d925") : secret "controller-certs-secret" not found Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.790785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metallb-excludel2\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.808317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-cert\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.820589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvmd9\" (UniqueName: \"kubernetes.io/projected/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-kube-api-access-qvmd9\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.836662 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm68d\" (UniqueName: \"kubernetes.io/projected/516cf204-1263-431e-a450-039739b0d925-kube-api-access-gm68d\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:11 crc kubenswrapper[4867]: I0214 04:26:11.844345 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.196307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.201710 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cfde5532-97c7-47b8-8b63-0159fc9e82b9-metrics-certs\") pod \"frr-k8s-nzdwg\" (UID: \"cfde5532-97c7-47b8-8b63-0159fc9e82b9\") " pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.273275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb"] Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297763 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.297798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:12 crc kubenswrapper[4867]: E0214 04:26:12.302722 4867 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 14 04:26:12 crc kubenswrapper[4867]: E0214 04:26:12.302832 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist podName:6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8 nodeName:}" failed. No retries permitted until 2026-02-14 04:26:13.302806726 +0000 UTC m=+1005.383744080 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist") pod "speaker-4hvw7" (UID: "6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8") : secret "metallb-memberlist" not found Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.310216 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-metrics-certs\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.310263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/516cf204-1263-431e-a450-039739b0d925-metrics-certs\") pod \"controller-69bbfbf88f-zhmxc\" (UID: \"516cf204-1263-431e-a450-039739b0d925\") " pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.414085 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.584076 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.601703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"f0f07f92e5b1e4236153a02b0c2fb464b5e43abca36d508342ba96642bd11950"} Feb 14 04:26:12 crc kubenswrapper[4867]: I0214 04:26:12.602689 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"ce77dc003a1565cbaecc3f50e4f0d210e45e322dbcb4cc8aa0c95512aa6a94b8"} Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.009130 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-zhmxc"] Feb 14 04:26:13 crc kubenswrapper[4867]: W0214 04:26:13.016687 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516cf204_1263_431e_a450_039739b0d925.slice/crio-5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39 WatchSource:0}: Error finding container 5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39: Status 404 returned error can't find the container with id 5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39 Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.316540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.322903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8-memberlist\") pod \"speaker-4hvw7\" (UID: \"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8\") " pod="metallb-system/speaker-4hvw7" Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.473424 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4hvw7" Feb 14 04:26:13 crc kubenswrapper[4867]: W0214 04:26:13.498410 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e0a7a97_9ea6_4dcf_85a4_995d891fa5f8.slice/crio-0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50 WatchSource:0}: Error finding container 0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50: Status 404 returned error can't find the container with id 0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50 Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"e7c8076069c83a4d5e444b60b9e3f64f117dacf01a093cfeed7b95ebb0df2e1d"} Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"4bbf9b9014a8149d15e6f79b0dcdd17d692b22b97863b761a75ad9d86bb21987"} Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.628724 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-zhmxc" event={"ID":"516cf204-1263-431e-a450-039739b0d925","Type":"ContainerStarted","Data":"5db76231faba5162eef42083f43337039ed7e39a0d2c457e34f80b3c9d246a39"} Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.629844 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.632234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"0ed5654043fca3cf04be83ab8bf5856a5166c0e07070e97c6f242745ca28bd50"} Feb 14 04:26:13 crc kubenswrapper[4867]: I0214 04:26:13.652094 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-zhmxc" podStartSLOduration=2.652047465 podStartE2EDuration="2.652047465s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:26:13.649810387 +0000 UTC m=+1005.730747701" watchObservedRunningTime="2026-02-14 04:26:13.652047465 +0000 UTC m=+1005.732984779" Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.659375 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"a6e42e3026e062a43a4b38b44ad77704843728f5218e54cbfd71ef805c27bacb"} Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.659740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"} Feb 14 04:26:14 crc kubenswrapper[4867]: I0214 04:26:14.712786 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4hvw7" podStartSLOduration=3.712761984 podStartE2EDuration="3.712761984s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:26:14.702964831 +0000 UTC m=+1006.783902145" watchObservedRunningTime="2026-02-14 04:26:14.712761984 +0000 UTC m=+1006.793699318" Feb 14 04:26:15 crc kubenswrapper[4867]: I0214 04:26:15.667650 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.754906 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"} Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.756765 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.758436 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="0aea0bb3b1a2276d3b97fea97e62516551b7b690f473022c0b6928d6ab7538ff" exitCode=0 Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.758470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"0aea0bb3b1a2276d3b97fea97e62516551b7b690f473022c0b6928d6ab7538ff"} Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.793098 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podStartSLOduration=2.38072621 podStartE2EDuration="11.79307356s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="2026-02-14 04:26:12.276418414 +0000 UTC m=+1004.357355728" lastFinishedPulling="2026-02-14 04:26:21.688765764 +0000 UTC m=+1013.769703078" observedRunningTime="2026-02-14 04:26:22.788020329 +0000 UTC m=+1014.868957643" watchObservedRunningTime="2026-02-14 04:26:22.79307356 +0000 UTC m=+1014.874010874" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.932583 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.935708 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.957975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:22 crc kubenswrapper[4867]: I0214 04:26:22.978248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.059788 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.059890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060616 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060696 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.060767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.080782 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"community-operators-5crd9\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.286567 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.484098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4hvw7" Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.767200 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="cd9b88599d43ea9f82cd648a739f3263ee4fed536da4d246c5ad2c6864aad0a0" exitCode=0 Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.767248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"cd9b88599d43ea9f82cd648a739f3263ee4fed536da4d246c5ad2c6864aad0a0"} Feb 14 04:26:23 crc kubenswrapper[4867]: I0214 04:26:23.825554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:23 crc kubenswrapper[4867]: W0214 04:26:23.830750 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87fbab35_1a29_4dcd_94fd_b8d663b73622.slice/crio-de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b WatchSource:0}: Error finding container de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b: Status 404 returned error can't find the container with id de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.780404 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="0e79cead43e145ebc00cae1a79b5c2bfc4c85f66748229277e2c4e4c8ef7f651" exitCode=0 Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.780446 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"0e79cead43e145ebc00cae1a79b5c2bfc4c85f66748229277e2c4e4c8ef7f651"} Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783027 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e" exitCode=0 Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783086 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"} Feb 14 04:26:24 crc kubenswrapper[4867]: I0214 04:26:24.783134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b"} Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.794341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" 
event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"} Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.794859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"7dc066e000b0f0659e3da8817568bd5537335c5736f2f7be29d33d5f49e508de"} Feb 14 04:26:25 crc kubenswrapper[4867]: I0214 04:26:25.796016 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.816267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"2b641826e1bdc0c9338a084886d7dddd2dae8caa45adbe0d79e15726e335705c"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"5b950ed8d59a06a71544ad0e918e0512757c07d75b22164cb8ef06d82b857118"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817426 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"8a31d984db28b5601904993e4b679f38e218cc59f491162ded1096bde8c0e281"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"8426c07a22007ae8cd6cc9210f95af45a35f7e53edfe6d5be65ad75c86067d42"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.817690 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.819483 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb" exitCode=0 Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.819555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"} Feb 14 04:26:26 crc kubenswrapper[4867]: I0214 04:26:26.842263 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-nzdwg" podStartSLOduration=6.7129966549999995 podStartE2EDuration="15.842240055s" podCreationTimestamp="2026-02-14 04:26:11 +0000 UTC" firstStartedPulling="2026-02-14 04:26:12.54235287 +0000 UTC m=+1004.623290184" lastFinishedPulling="2026-02-14 04:26:21.67159626 +0000 UTC m=+1013.752533584" observedRunningTime="2026-02-14 04:26:26.841217988 +0000 UTC m=+1018.922155322" watchObservedRunningTime="2026-02-14 04:26:26.842240055 +0000 UTC m=+1018.923177369" Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.414933 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg" Feb 14 
04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.488840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.831282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerStarted","Data":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"} Feb 14 04:26:27 crc kubenswrapper[4867]: I0214 04:26:27.875870 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5crd9" podStartSLOduration=3.445740594 podStartE2EDuration="5.875846263s" podCreationTimestamp="2026-02-14 04:26:22 +0000 UTC" firstStartedPulling="2026-02-14 04:26:24.784381932 +0000 UTC m=+1016.865319246" lastFinishedPulling="2026-02-14 04:26:27.214487601 +0000 UTC m=+1019.295424915" observedRunningTime="2026-02-14 04:26:27.87111767 +0000 UTC m=+1019.952054994" watchObservedRunningTime="2026-02-14 04:26:27.875846263 +0000 UTC m=+1019.956783587" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.657251 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"] Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.659529 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.662200 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.663009 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.663200 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rmhl7" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.667554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"] Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.805090 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.907411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.928360 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbssn\" (UniqueName: \"kubernetes.io/projected/b4bb205c-0469-49a0-b783-9b51ae11ddfe-kube-api-access-zbssn\") pod \"openstack-operator-index-29mb7\" (UID: \"b4bb205c-0469-49a0-b783-9b51ae11ddfe\") " pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:30 crc kubenswrapper[4867]: I0214 04:26:30.988218 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:31 crc kubenswrapper[4867]: I0214 04:26:31.522379 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-29mb7"] Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:31.870784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"28a0d17bc4f973949e58e3192827620aa395a4178d30694415f9c18c7463ced4"} Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:32.227326 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 04:26:32 crc kubenswrapper[4867]: I0214 04:26:32.588683 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-zhmxc" Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.286746 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.287066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.353955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:33 crc kubenswrapper[4867]: I0214 04:26:33.960428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:35 crc kubenswrapper[4867]: I0214 04:26:35.911573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"} Feb 14 04:26:35 crc kubenswrapper[4867]: I0214 04:26:35.934628 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-29mb7" podStartSLOduration=2.36591447 podStartE2EDuration="5.934602721s" podCreationTimestamp="2026-02-14 04:26:30 +0000 UTC" firstStartedPulling="2026-02-14 04:26:31.530867316 +0000 UTC m=+1023.611804630" lastFinishedPulling="2026-02-14 04:26:35.099555547 +0000 UTC m=+1027.180492881" observedRunningTime="2026-02-14 04:26:35.932220049 +0000 UTC m=+1028.013157403" watchObservedRunningTime="2026-02-14 04:26:35.934602721 +0000 UTC m=+1028.015540035" Feb 14 04:26:37 crc kubenswrapper[4867]: I0214 04:26:37.845453 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:37 crc kubenswrapper[4867]: I0214 04:26:37.846451 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5crd9" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" containerID="cri-o://95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" gracePeriod=2 Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.791742 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934679 4867 generic.go:334] "Generic (PLEG): container finished" podID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" exitCode=0 Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934741 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5crd9" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"} Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5crd9" event={"ID":"87fbab35-1a29-4dcd-94fd-b8d663b73622","Type":"ContainerDied","Data":"de1d5c3a5756f6da67db095fba834d9ec14cf3329af840e674b920ee7c05505b"} Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.934833 4867 scope.go:117] "RemoveContainer" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.950393 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") pod \"87fbab35-1a29-4dcd-94fd-b8d663b73622\" (UID: \"87fbab35-1a29-4dcd-94fd-b8d663b73622\") " Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.951247 4867 scope.go:117] "RemoveContainer" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.951453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities" (OuterVolumeSpecName: "utilities") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.959404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl" (OuterVolumeSpecName: "kube-api-access-67bfl") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "kube-api-access-67bfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:26:38 crc kubenswrapper[4867]: I0214 04:26:38.966485 4867 scope.go:117] "RemoveContainer" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.004785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87fbab35-1a29-4dcd-94fd-b8d663b73622" (UID: "87fbab35-1a29-4dcd-94fd-b8d663b73622"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.025425 4867 scope.go:117] "RemoveContainer" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.026064 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": container with ID starting with 95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87 not found: ID does not exist" containerID="95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026106 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87"} err="failed to get container status \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": rpc error: code = NotFound desc = could not find container \"95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87\": container with ID starting with 95ef0fea88c456826ef1c8f90e3fcd90f92474f8009712d30ff98125d3441f87 not found: ID does not exist" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026136 4867 scope.go:117] "RemoveContainer" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb" Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.026547 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": container with ID starting with a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb not found: ID does not exist" containerID="a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026575 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb"} err="failed to get container status \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": rpc error: code = NotFound desc = could not find container \"a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb\": container with ID starting with a1f4caaea9c54471dd9119c2245d0b2f434696526f81d5bbf79e28b36d5b28cb not found: ID does not exist" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.026589 4867 scope.go:117] "RemoveContainer" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e" Feb 14 04:26:39 crc kubenswrapper[4867]: E0214 04:26:39.027228 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": container with ID starting with 8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e not found: ID does not exist" containerID="8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.027256 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e"} err="failed to get container status \"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": rpc error: code = NotFound desc = could not find container \"8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e\": container with ID starting with 8b2d52f06eebee7118510c869b74986963358a5a824948f2fd114a350afa5c2e not found: ID does not exist" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052261 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67bfl\" (UniqueName: \"kubernetes.io/projected/87fbab35-1a29-4dcd-94fd-b8d663b73622-kube-api-access-67bfl\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052293 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.052303 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87fbab35-1a29-4dcd-94fd-b8d663b73622-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.253889 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:39 crc kubenswrapper[4867]: I0214 04:26:39.259787 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5crd9"] Feb 14 04:26:40 crc kubenswrapper[4867]: I0214 04:26:40.988431 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:40 crc kubenswrapper[4867]: I0214 04:26:40.988814 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.015786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" path="/var/lib/kubelet/pods/87fbab35-1a29-4dcd-94fd-b8d663b73622/volumes" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.026499 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:41 crc kubenswrapper[4867]: I0214 04:26:41.986715 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 04:26:42 crc kubenswrapper[4867]: I0214 04:26:42.420008 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-nzdwg" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.887415 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888057 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-utilities" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888070 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-utilities" Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888103 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-content" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888112 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="extract-content" Feb 14 04:26:43 crc kubenswrapper[4867]: E0214 04:26:43.888120 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888125 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.888273 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87fbab35-1a29-4dcd-94fd-b8d663b73622" containerName="registry-server" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.889439 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.891532 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-n8htd" Feb 14 04:26:43 crc kubenswrapper[4867]: I0214 04:26:43.901564 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.036878 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.036983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.037028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138856 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.138912 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.139659 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.139698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.157403 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.215833 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.683619 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7"] Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.979729 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerStarted","Data":"26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7"} Feb 14 04:26:44 crc kubenswrapper[4867]: I0214 04:26:44.980073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerStarted","Data":"c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a"} Feb 14 04:26:45 crc kubenswrapper[4867]: I0214 04:26:45.991467 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7" exitCode=0 Feb 14 04:26:45 crc kubenswrapper[4867]: I0214 04:26:45.991531 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"26692d66fac6046678dcec0f0061f631b2eeddc2732b9a77066139ad9b186ab7"} Feb 14 04:26:47 crc kubenswrapper[4867]: I0214 04:26:47.003085 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="f28e180ce4df271ab60e8101d1a4a5a090a6e9f14af22216d08fa32a9c9cfce1" exitCode=0 Feb 14 04:26:47 crc kubenswrapper[4867]: I0214 04:26:47.014293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"f28e180ce4df271ab60e8101d1a4a5a090a6e9f14af22216d08fa32a9c9cfce1"} Feb 14 04:26:48 crc kubenswrapper[4867]: I0214 04:26:48.021992 4867 generic.go:334] "Generic (PLEG): container finished" podID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerID="159bc62f52e94cba661c0e5bd47942bf912ce2a37d3c6a9764d0abd2e62d919d" exitCode=0 Feb 14 04:26:48 crc kubenswrapper[4867]: I0214 04:26:48.022306 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"159bc62f52e94cba661c0e5bd47942bf912ce2a37d3c6a9764d0abd2e62d919d"} Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.383536 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539810 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.539998 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") pod \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\" (UID: \"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb\") " Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.540776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle" (OuterVolumeSpecName: "bundle") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.548827 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh" (OuterVolumeSpecName: "kube-api-access-86bzh") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "kube-api-access-86bzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.555357 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util" (OuterVolumeSpecName: "util") pod "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" (UID: "fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643009 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bzh\" (UniqueName: \"kubernetes.io/projected/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-kube-api-access-86bzh\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643051 4867 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:49 crc kubenswrapper[4867]: I0214 04:26:49.643060 4867 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb-util\") on node \"crc\" DevicePath \"\"" Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.039924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" event={"ID":"fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb","Type":"ContainerDied","Data":"c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a"} Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.040266 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c384d75f4d61a30f6f036a03975d5148dcaeb9cdbd96528b48b66f421343518a" Feb 14 04:26:50 crc kubenswrapper[4867]: I0214 04:26:50.039974 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.042406 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044211 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044294 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044366 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="util" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044425 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="util" Feb 14 04:26:53 crc kubenswrapper[4867]: E0214 04:26:53.044485 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="pull" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044566 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="pull" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.044765 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb" containerName="extract" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.045369 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.047341 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-tvplp" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.073371 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.206786 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.307965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.326292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q46f\" (UniqueName: \"kubernetes.io/projected/10461723-ecff-48fe-a034-9a07bf3bf8f7-kube-api-access-5q46f\") pod \"openstack-operator-controller-init-6b9546c8f4-49lm8\" (UID: \"10461723-ecff-48fe-a034-9a07bf3bf8f7\") " pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.365364 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:53 crc kubenswrapper[4867]: I0214 04:26:53.834177 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8"] Feb 14 04:26:54 crc kubenswrapper[4867]: I0214 04:26:54.080383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" event={"ID":"10461723-ecff-48fe-a034-9a07bf3bf8f7","Type":"ContainerStarted","Data":"da48c96176f86ed873e7ef026b1e135894bc0628dcc59baf8b819923a1ba2408"} Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.130157 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" event={"ID":"10461723-ecff-48fe-a034-9a07bf3bf8f7","Type":"ContainerStarted","Data":"b501166086dcf813d43fbe01f66927fcbef4f7716cc8b6badc80e7113b808be2"} Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.130807 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:26:59 crc kubenswrapper[4867]: I0214 04:26:59.162228 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podStartSLOduration=1.117456254 podStartE2EDuration="6.162207793s" podCreationTimestamp="2026-02-14 04:26:53 +0000 UTC" firstStartedPulling="2026-02-14 04:26:53.848585481 +0000 UTC m=+1045.929522795" lastFinishedPulling="2026-02-14 04:26:58.89333702 +0000 UTC m=+1050.974274334" observedRunningTime="2026-02-14 04:26:59.154224617 +0000 UTC m=+1051.235161941" watchObservedRunningTime="2026-02-14 04:26:59.162207793 +0000 UTC m=+1051.243145127" Feb 14 04:27:13 crc kubenswrapper[4867]: I0214 04:27:13.370035 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.160044 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.161572 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.164101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-q4xdx" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.173087 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.174215 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.175059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.179691 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-scmhd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.183492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.199251 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.200441 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.207923 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kggl2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.220743 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.221890 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.224003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-hv82j" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.229843 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.235264 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.276133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277112 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277218 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277257 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.277292 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.305878 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.305912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8t7h\" (UniqueName: \"kubernetes.io/projected/66c8a0dd-f076-4994-bd42-39c80de83233-kube-api-access-w8t7h\") pod \"barbican-operator-controller-manager-868647ff47-pxm8d\" (UID: \"66c8a0dd-f076-4994-bd42-39c80de83233\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.306876 4867 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.310897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-g9dmc" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.311725 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.312724 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.314186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-mw859" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.353590 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.362269 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381054 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381179 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381208 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.381231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: 
\"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.395144 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.396514 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.415546 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.415816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9d88p" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.423138 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.425004 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gntpk\" (UniqueName: \"kubernetes.io/projected/1f889f7b-8ae5-43e3-ab54-d3bf06c010df-kube-api-access-gntpk\") pod \"glance-operator-controller-manager-77987464f4-tpfxn\" (UID: \"1f889f7b-8ae5-43e3-ab54-d3bf06c010df\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.434376 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trhdw\" (UniqueName: \"kubernetes.io/projected/652d3b74-0634-4f8f-b5ef-3adfc53920eb-kube-api-access-trhdw\") pod \"designate-operator-controller-manager-6d8bf5c495-ndb8l\" (UID: \"652d3b74-0634-4f8f-b5ef-3adfc53920eb\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.453957 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.455484 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.461020 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5nx\" (UniqueName: \"kubernetes.io/projected/3025ff58-4a91-43f5-8f15-94cadd0cef8b-kube-api-access-jt5nx\") pod \"cinder-operator-controller-manager-5d946d989d-chbgl\" (UID: \"3025ff58-4a91-43f5-8f15-94cadd0cef8b\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.468678 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hh6sv" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.470555 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.482664 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483148 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: \"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.483252 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.482980 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.484141 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.493354 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2pspc" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.505719 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.527932 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.536584 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.542295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgnfc\" (UniqueName: \"kubernetes.io/projected/4b75df5b-04e5-445f-8d2d-57c6cbe5971c-kube-api-access-cgnfc\") pod \"horizon-operator-controller-manager-5b9b8895d5-bgznq\" (UID: \"4b75df5b-04e5-445f-8d2d-57c6cbe5971c\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.552789 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.583075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnw7g\" (UniqueName: \"kubernetes.io/projected/185d4fd5-608b-48d8-8731-27e7a05adfe2-kube-api-access-vnw7g\") pod \"heat-operator-controller-manager-69f49c598c-jxpv2\" (UID: \"185d4fd5-608b-48d8-8731-27e7a05adfe2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584348 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584446 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: 
\"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.584544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: E0214 04:27:33.584661 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:33 crc kubenswrapper[4867]: E0214 04:27:33.584710 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:34.08469228 +0000 UTC m=+1086.165629584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.601276 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.681005 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.684358 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.685337 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.690072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: \"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.691711 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-kd49j" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.699816 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.710875 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwh7\" (UniqueName: \"kubernetes.io/projected/ebee5651-7233-4c18-bb97-a4dc91eabef4-kube-api-access-llwh7\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.731221 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pjj\" (UniqueName: \"kubernetes.io/projected/dc65ca0c-1d72-468f-b600-dfb8332bf4bd-kube-api-access-s7pjj\") pod \"keystone-operator-controller-manager-b4d948c87-x7qx5\" (UID: \"dc65ca0c-1d72-468f-b600-dfb8332bf4bd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.740572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.741775 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.753204 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.764172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-srcqs" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.773852 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.791436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.791586 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.798269 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.799479 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.809883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-lmbx6" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.810096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpjl7\" (UniqueName: \"kubernetes.io/projected/94ff35ef-77e1-4085-ad2f-837ebc666b2a-kube-api-access-bpjl7\") pod \"ironic-operator-controller-manager-554564d7fc-6nhjp\" (UID: \"94ff35ef-77e1-4085-ad2f-837ebc666b2a\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.854879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"] Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.866206 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.872392 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ff2jx" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909125 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909226 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.909254 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.938132 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.949333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjptq\" (UniqueName: \"kubernetes.io/projected/38a9cdf3-42e2-4279-8092-af7e8c82bc51-kube-api-access-kjptq\") pod \"neutron-operator-controller-manager-64ddbf8bb-2xwdd\" (UID: \"38a9cdf3-42e2-4279-8092-af7e8c82bc51\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.974683 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8nr\" (UniqueName: \"kubernetes.io/projected/6b5078d9-f30f-40a8-b5b5-8eb11271ec10-kube-api-access-nd8nr\") pod \"manila-operator-controller-manager-54f6768c69-8dzwp\" (UID: \"6b5078d9-f30f-40a8-b5b5-8eb11271ec10\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:27:33 crc kubenswrapper[4867]: I0214 04:27:33.974758 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.069832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.070168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.082133 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.110707 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.111075 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.111969 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzhx\" (UniqueName: \"kubernetes.io/projected/7bb6de63-3c92-43de-a01b-b34df765aeba-kube-api-access-btzhx\") pod \"mariadb-operator-controller-manager-6994f66f48-wwm9m\" (UID: \"7bb6de63-3c92-43de-a01b-b34df765aeba\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.154129 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.155334 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.161845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hnlct" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.168423 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.172735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.172953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.182281 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.182344 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.182326694 +0000 UTC m=+1087.263264008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.206968 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.228577 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.231092 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.250631 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xh5pm" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.256655 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkk9v\" (UniqueName: \"kubernetes.io/projected/74a43e5b-11c4-459d-bbc7-03aa03489f17-kube-api-access-dkk9v\") pod \"nova-operator-controller-manager-567668f5cf-tf6rg\" (UID: \"74a43e5b-11c4-459d-bbc7-03aa03489f17\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.274048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.274100 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.309754 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.311395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.333627 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.346613 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-97lkg" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.347054 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.349945 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.372137 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.373652 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375419 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375497 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.375541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.379124 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mz82h" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.383247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.390318 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.394572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.396190 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.402005 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-x7jg4" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.403626 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.422324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5cmr\" (UniqueName: \"kubernetes.io/projected/ffb00aaf-6760-440e-827a-f795baf3693a-kube-api-access-l5cmr\") pod \"ovn-operator-controller-manager-d44cf6b75-dszdp\" (UID: \"ffb00aaf-6760-440e-827a-f795baf3693a\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.429637 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.436872 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.438482 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.442135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-c6gsz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.448057 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.450069 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.455436 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v45zn" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.463742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwncd\" (UniqueName: \"kubernetes.io/projected/64ff8480-2ca0-40d5-b5c9-448d0db3c575-kube-api-access-kwncd\") pod \"octavia-operator-controller-manager-69f8888797-7zkqz\" (UID: \"64ff8480-2ca0-40d5-b5c9-448d0db3c575\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.475793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.479274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.479474 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.480306 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.480737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.479871 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.482796 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:34.982724052 +0000 UTC m=+1087.063661556 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.488134 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.488809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.517658 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.535094 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.556939 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkz75\" (UniqueName: \"kubernetes.io/projected/634f9e2f-2100-49e3-a31f-a369cf8ff13f-kube-api-access-hkz75\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.562963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjqt\" (UniqueName: \"kubernetes.io/projected/9ec66be5-3947-45d1-bf34-c7639e8d4c8a-kube-api-access-lhjqt\") pod \"placement-operator-controller-manager-8497b45c89-vwvtz\" (UID: \"9ec66be5-3947-45d1-bf34-c7639e8d4c8a\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.591578 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.591696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc 
kubenswrapper[4867]: I0214 04:27:34.591803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.633278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crhw2\" (UniqueName: \"kubernetes.io/projected/67e3f2b9-2dbf-4c35-b1cd-02be51f58e38-kube-api-access-crhw2\") pod \"test-operator-controller-manager-7866795846-t7hwz\" (UID: \"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38\") " pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.638237 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.643444 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.649628 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.647920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dth6\" (UniqueName: \"kubernetes.io/projected/bc4bb4fd-bcc8-438b-af84-a2db3d3e346a-kube-api-access-7dth6\") pod \"swift-operator-controller-manager-68f46476f-snrw6\" (UID: \"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.655574 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgtlx\" (UniqueName: \"kubernetes.io/projected/d72a97fb-2a6a-4af1-8f0c-de88ab679119-kube-api-access-dgtlx\") pod \"telemetry-operator-controller-manager-55dcdcc8d-49t56\" (UID: \"d72a97fb-2a6a-4af1-8f0c-de88ab679119\") " pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.657014 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-whvgl" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.694679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m96xf\" (UniqueName: \"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.710086 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.748255 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.760222 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.761376 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773533 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zz8bp" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773732 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.773850 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.783228 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.795926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m96xf\" (UniqueName: \"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.796947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.797063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.797192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.841162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m96xf\" (UniqueName: 
\"kubernetes.io/projected/82e5dbee-ab1e-498c-9460-be75226afa18-kube-api-access-m96xf\") pod \"watcher-operator-controller-manager-5db88f68c-6d9jj\" (UID: \"82e5dbee-ab1e-498c-9460-be75226afa18\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.855688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.866176 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.871011 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jclfs" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.919843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.919944 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.920010 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.920106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds89f\" (UniqueName: \"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.921665 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.921725 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.421709862 +0000 UTC m=+1087.502647176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.922432 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: E0214 04:27:34.922468 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:35.422455562 +0000 UTC m=+1087.503392876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.922495 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.939070 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.959467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cmk\" (UniqueName: \"kubernetes.io/projected/c83fa345-043f-453c-b797-a00db3111d44-kube-api-access-64cmk\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.963288 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:27:34 crc kubenswrapper[4867]: W0214 04:27:34.964606 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod652d3b74_0634_4f8f_b5ef_3adfc53920eb.slice/crio-52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28 WatchSource:0}: Error finding container 52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28: Status 404 returned error can't find the container with id 52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28 Feb 14 04:27:34 crc kubenswrapper[4867]: I0214 04:27:34.984015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.010528 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022212 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022470 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds89f\" (UniqueName: \"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.022580 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.022729 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.022769 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.022755925 +0000 UTC m=+1088.103693239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.048742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds89f\" (UniqueName: \"kubernetes.io/projected/c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d-kube-api-access-ds89f\") pod \"rabbitmq-cluster-operator-manager-668c99d594-87pdl\" (UID: \"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.075499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.233452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.234479 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.234579 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:37.234560842 +0000 UTC m=+1089.315498156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.442286 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.442361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442519 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442553 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442601 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.442578352 +0000 UTC m=+1088.523515666 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: E0214 04:27:35.442670 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:36.442642683 +0000 UTC m=+1088.523580027 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.465913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" event={"ID":"652d3b74-0634-4f8f-b5ef-3adfc53920eb","Type":"ContainerStarted","Data":"52edf9eab01e212f143162fbc14ac778a09a8eaf3df72d6c10af306a3d505f28"} Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.534368 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.549569 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"] Feb 14 04:27:35 crc kubenswrapper[4867]: I0214 04:27:35.571190 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.055444 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.055655 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.055723 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.055691256 +0000 UTC m=+1090.136628570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.312984 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"] Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.345818 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94ff35ef_77e1_4085_ad2f_837ebc666b2a.slice/crio-4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9 WatchSource:0}: Error finding container 4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9: Status 404 returned error can't find the container with id 4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9 Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.411275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.417156 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.430325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.444260 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"] Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.450831 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bb6de63_3c92_43de_a01b_b34df765aeba.slice/crio-84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863 WatchSource:0}: Error finding container 84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863: Status 404 returned error can't find the container with id 84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863 Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.465457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.465557 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.465887 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.465962 4867 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.465941835 +0000 UTC m=+1090.546879149 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.466089 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: E0214 04:27:36.466115 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:38.466107349 +0000 UTC m=+1090.547044663 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:36 crc kubenswrapper[4867]: W0214 04:27:36.470094 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc65ca0c_1d72_468f_b600_dfb8332bf4bd.slice/crio-d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d WatchSource:0}: Error finding container d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d: Status 404 returned error can't find the container with id d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.484873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.486727 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" event={"ID":"3025ff58-4a91-43f5-8f15-94cadd0cef8b","Type":"ContainerStarted","Data":"3486d63008881850141f3c6801e5de370335935b8a0c2fd4f6e6473dfca53257"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.490383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"4e640445f8d68ccf2f3516341efbf9e3412a5ec236840dccefaf9a4c3a5386c9"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.491821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" event={"ID":"6b5078d9-f30f-40a8-b5b5-8eb11271ec10","Type":"ContainerStarted","Data":"70004c12d7ec2f7c3fbf5ed65f2704ce433ec9e7b6e632f35b08e2734c5129ab"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.492751 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.495605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" event={"ID":"4b75df5b-04e5-445f-8d2d-57c6cbe5971c","Type":"ContainerStarted","Data":"c090158e55241f0f12ac4546db79eb2cccfa1075841accaaaefe07be84fabef6"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.497252 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" event={"ID":"185d4fd5-608b-48d8-8731-27e7a05adfe2","Type":"ContainerStarted","Data":"c397c8163fa1ede506dad697514827cc45774b1109508546a439953b13268236"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.499383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" event={"ID":"66c8a0dd-f076-4994-bd42-39c80de83233","Type":"ContainerStarted","Data":"1767024aea7b6d6a4618042325dd23bbba1b2c218958dcb948aefcbed3993a01"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.501040 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" event={"ID":"38a9cdf3-42e2-4279-8092-af7e8c82bc51","Type":"ContainerStarted","Data":"468f8b506ff6171da44b28b3b05f6da5a38aba9184f679530ec1d4c9ba71fdfd"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.502094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" event={"ID":"1f889f7b-8ae5-43e3-ab54-d3bf06c010df","Type":"ContainerStarted","Data":"0dd50f2f66fa11dad74488744124afd939b227ecebb16740df7975f37dd8b6e0"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.503678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" event={"ID":"7bb6de63-3c92-43de-a01b-b34df765aeba","Type":"ContainerStarted","Data":"84bc1182291f959caf0fbd7b52cd6048d5ffc97b45d13e117dd68228ef852863"} Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.852226 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.882708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.893292 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.940245 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.966458 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-t7hwz"] Feb 14 04:27:36 crc kubenswrapper[4867]: I0214 04:27:36.973421 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"] Feb 14 04:27:37 crc kubenswrapper[4867]: W0214 04:27:37.049528 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffb00aaf_6760_440e_827a_f795baf3693a.slice/crio-2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07 WatchSource:0}: Error finding container 
2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07: Status 404 returned error can't find the container with id 2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07 Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.121398 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.158779 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.195638 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"] Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.304414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.304744 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.304801 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:41.304782516 +0000 UTC m=+1093.385719830 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.310316 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgtlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-55dcdcc8d-49t56_openstack-operators(d72a97fb-2a6a-4af1-8f0c-de88ab679119): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.313835 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.534890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"ec8538ea905e098132cd0a4606ca455df5793f551ee0ffc05f39aeaefa2a5afd"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 
04:27:37.553406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" event={"ID":"82e5dbee-ab1e-498c-9460-be75226afa18","Type":"ContainerStarted","Data":"bbed95620b33275c0700efbeb8a76ea9636171c1539b4dc8b4d7dce7ae4bc3fb"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.560342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" event={"ID":"d72a97fb-2a6a-4af1-8f0c-de88ab679119","Type":"ContainerStarted","Data":"aed5e5c714e4a9c1e168c59ce610a5ecbbc01db9fcb895fd3688ee465aacf1ce"} Feb 14 04:27:37 crc kubenswrapper[4867]: E0214 04:27:37.563376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.563698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" event={"ID":"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a","Type":"ContainerStarted","Data":"451971d4e6eb23f775ab700a3e8168a3b4894c06cec5fb806095d84b4e098b02"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.565390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" event={"ID":"dc65ca0c-1d72-468f-b600-dfb8332bf4bd","Type":"ContainerStarted","Data":"d1c0923e9066cbdbb8acdffba82d421dd7e7c0c5b5873387483cb09db6b8223d"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.589245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" event={"ID":"74a43e5b-11c4-459d-bbc7-03aa03489f17","Type":"ContainerStarted","Data":"0cc8aac57799d65f4415381fa51edae43efec455f3302fead56283b6071fefac"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.593391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" event={"ID":"9ec66be5-3947-45d1-bf34-c7639e8d4c8a","Type":"ContainerStarted","Data":"fd3f56a56f7735e4753c75e480b745ffcfcb6e579b6d15d338b096ed0bb3f044"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.602482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" event={"ID":"ffb00aaf-6760-440e-827a-f795baf3693a","Type":"ContainerStarted","Data":"2d25dceeaacba429fb54ce0c37d77b73ab889bcae69443c380a1172186bade07"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.608431 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" event={"ID":"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38","Type":"ContainerStarted","Data":"ba835ce0379d301618433cd283b5f5bdf8901d8b2297bb3ea4165c0b7992dc57"} Feb 14 04:27:37 crc kubenswrapper[4867]: I0214 04:27:37.610036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"ae9e1c6041f00c8d2d1988f72b26402f126a810ecf255e0e707a4a679e15e711"} Feb 14 04:27:38 
crc kubenswrapper[4867]: I0214 04:27:38.140535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.140907 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.141066 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:42.141041491 +0000 UTC m=+1094.221978805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: I0214 04:27:38.555330 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:38 crc kubenswrapper[4867]: I0214 04:27:38.555425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.555783 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.555861 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:42.555839466 +0000 UTC m=+1094.636776780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.556326 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.556363 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. 
No retries permitted until 2026-02-14 04:27:42.556353839 +0000 UTC m=+1094.637291153 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:38 crc kubenswrapper[4867]: E0214 04:27:38.643992 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.32:5001/openstack-k8s-operators/telemetry-operator:49fb0a393e644ad55559f09981950c6ee3a56dc1\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" Feb 14 04:27:41 crc kubenswrapper[4867]: I0214 04:27:41.338203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:41 crc kubenswrapper[4867]: E0214 04:27:41.338884 4867 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:41 crc kubenswrapper[4867]: E0214 04:27:41.338935 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert podName:ebee5651-7233-4c18-bb97-a4dc91eabef4 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:49.338920793 +0000 UTC m=+1101.419858107 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert") pod "infra-operator-controller-manager-79d975b745-jqq2w" (UID: "ebee5651-7233-4c18-bb97-a4dc91eabef4") : secret "infra-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.150828 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.151063 4867 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.151237 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert podName:634f9e2f-2100-49e3-a31f-a369cf8ff13f nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.151221348 +0000 UTC m=+1102.232158662 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" (UID: "634f9e2f-2100-49e3-a31f-a369cf8ff13f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.559278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:42 crc kubenswrapper[4867]: I0214 04:27:42.559487 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.559418 4867 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.559642 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.559626588 +0000 UTC m=+1102.640563902 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "metrics-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.560173 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:42 crc kubenswrapper[4867]: E0214 04:27:42.560213 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:27:50.560204273 +0000 UTC m=+1102.641141577 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.406482 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.407184 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jt5nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-chbgl_openstack-operators(3025ff58-4a91-43f5-8f15-94cadd0cef8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.408406 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" 
podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.410995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.418915 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebee5651-7233-4c18-bb97-a4dc91eabef4-cert\") pod \"infra-operator-controller-manager-79d975b745-jqq2w\" (UID: \"ebee5651-7233-4c18-bb97-a4dc91eabef4\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: I0214 04:27:49.466650 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 04:27:49 crc kubenswrapper[4867]: E0214 04:27:49.767343 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.225465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.230058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/634f9e2f-2100-49e3-a31f-a369cf8ff13f-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t\" (UID: \"634f9e2f-2100-49e3-a31f-a369cf8ff13f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.281714 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.632922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.634232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:50 crc kubenswrapper[4867]: E0214 04:27:50.634457 4867 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 14 04:27:50 crc kubenswrapper[4867]: E0214 04:27:50.634570 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs podName:c83fa345-043f-453c-b797-a00db3111d44 nodeName:}" failed. No retries permitted until 2026-02-14 04:28:06.634544655 +0000 UTC m=+1118.715481979 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs") pod "openstack-operator-controller-manager-75585db5cc-kzk25" (UID: "c83fa345-043f-453c-b797-a00db3111d44") : secret "webhook-server-cert" not found Feb 14 04:27:50 crc kubenswrapper[4867]: I0214 04:27:50.637326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-metrics-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.483266 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.483998 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nd8nr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-54f6768c69-8dzwp_openstack-operators(6b5078d9-f30f-40a8-b5b5-8eb11271ec10): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.485213 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.781592 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c\\\"\"" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.986919 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.987085 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-crhw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-t7hwz_openstack-operators(67e3f2b9-2dbf-4c35-b1cd-02be51f58e38): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:51 crc kubenswrapper[4867]: E0214 04:27:51.988426 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" Feb 14 04:27:52 crc kubenswrapper[4867]: E0214 04:27:52.788175 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.969839 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.970434 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vnw7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69f49c598c-jxpv2_openstack-operators(185d4fd5-608b-48d8-8731-27e7a05adfe2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:54 crc kubenswrapper[4867]: E0214 04:27:54.972073 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.745452 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.745850 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m96xf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-6d9jj_openstack-operators(82e5dbee-ab1e-498c-9460-be75226afa18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.747563 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.812661 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" Feb 14 04:27:55 crc kubenswrapper[4867]: E0214 04:27:55.813006 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" 
podUID="82e5dbee-ab1e-498c-9460-be75226afa18" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.411062 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.411288 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gntpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987464f4-tpfxn_openstack-operators(1f889f7b-8ae5-43e3-ab54-d3bf06c010df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.412518 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" Feb 14 04:27:56 crc kubenswrapper[4867]: E0214 04:27:56.837949 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df\\\"\"" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.711566 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.712389 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjptq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-2xwdd_openstack-operators(38a9cdf3-42e2-4279-8092-af7e8c82bc51): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.713636 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" 
podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" Feb 14 04:27:57 crc kubenswrapper[4867]: E0214 04:27:57.853996 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.480552 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.480922 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7dth6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-snrw6_openstack-operators(bc4bb4fd-bcc8-438b-af84-a2db3d3e346a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.482298 4867 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" Feb 14 04:27:58 crc kubenswrapper[4867]: E0214 04:27:58.859021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.000335 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.000824 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bpjl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-6nhjp_openstack-operators(94ff35ef-77e1-4085-ad2f-837ebc666b2a): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.002011 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.542917 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.543084 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8t7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-868647ff47-pxm8d_openstack-operators(66c8a0dd-f076-4994-bd42-39c80de83233): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.545008 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.867439 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.867488 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.991647 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.992069 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lhjqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-vwvtz_openstack-operators(9ec66be5-3947-45d1-bf34-c7639e8d4c8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:27:59 crc kubenswrapper[4867]: E0214 04:27:59.993528 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.445358 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.445592 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btzhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-wwm9m_openstack-operators(7bb6de63-3c92-43de-a01b-b34df765aeba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.446812 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.874054 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" Feb 14 04:28:00 crc kubenswrapper[4867]: E0214 04:28:00.874406 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" Feb 14 04:28:01 crc kubenswrapper[4867]: I0214 04:28:01.252608 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:28:01 crc kubenswrapper[4867]: I0214 04:28:01.252684 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.980698 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.980868 4867 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trhdw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-6d8bf5c495-ndb8l_openstack-operators(652d3b74-0634-4f8f-b5ef-3adfc53920eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:01 crc kubenswrapper[4867]: E0214 04:28:01.982027 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" Feb 14 04:28:02 crc kubenswrapper[4867]: E0214 04:28:02.891369 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642\\\"\"" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.130935 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.131316 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dkk9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-tf6rg_openstack-operators(74a43e5b-11c4-459d-bbc7-03aa03489f17): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.132812 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" Feb 14 04:28:03 crc kubenswrapper[4867]: E0214 04:28:03.898363 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" 
podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.263175 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.263695 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ds89f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-87pdl_openstack-operators(c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.265047 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podUID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" Feb 14 04:28:04 crc kubenswrapper[4867]: W0214 04:28:04.854611 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod634f9e2f_2100_49e3_a31f_a369cf8ff13f.slice/crio-a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2 WatchSource:0}: Error finding container a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2: Status 404 returned error can't find the container with id a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2 Feb 14 04:28:04 crc 
kubenswrapper[4867]: I0214 04:28:04.856015 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"] Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.906990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.907070 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.908926 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" event={"ID":"d72a97fb-2a6a-4af1-8f0c-de88ab679119","Type":"ContainerStarted","Data":"fed495e34766497dd42cf0325a418ddf77140542a3dce04637259a53eb94b72f"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.909589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.910559 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" event={"ID":"dc65ca0c-1d72-468f-b600-dfb8332bf4bd","Type":"ContainerStarted","Data":"e88e177c8e3d3815ee6c35934ac281ad46676b4d19ae3457ab25535ae3e922be"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.910712 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.912144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" event={"ID":"ffb00aaf-6760-440e-827a-f795baf3693a","Type":"ContainerStarted","Data":"c83513991e76903ffa1ba3f5e92920d4dac8235a719191fc8a9e37c60c0a9075"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.912297 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.918987 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" event={"ID":"67e3f2b9-2dbf-4c35-b1cd-02be51f58e38","Type":"ContainerStarted","Data":"67f0608bf77e8453cbdaea86d982c6360c1581bec5e3ea53dc77c4258ce8e77a"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.919227 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.927371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" event={"ID":"6b5078d9-f30f-40a8-b5b5-8eb11271ec10","Type":"ContainerStarted","Data":"7256c05ae79a737b3cc7955bbdecdf7c386ed5125625a5dea66d06a219c3f123"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.927663 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" Feb 14 04:28:04 crc 
kubenswrapper[4867]: I0214 04:28:04.931255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" event={"ID":"4b75df5b-04e5-445f-8d2d-57c6cbe5971c","Type":"ContainerStarted","Data":"66917816db67d8bf627a0d6b3d12c972b57d5b2fa6cec95cc61d85d0fb783963"} Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.931748 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.932799 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"a6d603fa2a233d07377724b48a5e2f32f017c667c1fc3a0a359a71fed8e1a5d2"} Feb 14 04:28:04 crc kubenswrapper[4867]: E0214 04:28:04.935357 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podUID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" Feb 14 04:28:04 crc kubenswrapper[4867]: I0214 04:28:04.945379 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podStartSLOduration=4.62802649 podStartE2EDuration="31.945360043s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.933399252 +0000 UTC m=+1089.014336556" lastFinishedPulling="2026-02-14 04:28:04.250732795 +0000 UTC m=+1116.331670109" observedRunningTime="2026-02-14 04:28:04.931310437 +0000 UTC m=+1117.012247751" watchObservedRunningTime="2026-02-14 04:28:04.945360043 +0000 UTC m=+1117.026297357" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.024696 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podStartSLOduration=4.454048541 podStartE2EDuration="32.024677696s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.572477421 +0000 UTC m=+1087.653414735" lastFinishedPulling="2026-02-14 04:28:03.143106566 +0000 UTC m=+1115.224043890" observedRunningTime="2026-02-14 04:28:05.004314566 +0000 UTC m=+1117.085251880" watchObservedRunningTime="2026-02-14 04:28:05.024677696 +0000 UTC m=+1117.105615010" Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.047762 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"] Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.054073 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podStartSLOduration=4.871434172 podStartE2EDuration="32.05404842s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.062127131 +0000 UTC m=+1089.143064445" lastFinishedPulling="2026-02-14 04:28:04.244741379 +0000 UTC m=+1116.325678693" observedRunningTime="2026-02-14 04:28:05.044654226 +0000 UTC m=+1117.125591540" watchObservedRunningTime="2026-02-14 04:28:05.05404842 +0000 UTC m=+1117.134985754" 
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.095145 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podStartSLOduration=3.83936297 podStartE2EDuration="32.095127399s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.448650647 +0000 UTC m=+1088.529587961" lastFinishedPulling="2026-02-14 04:28:04.704415076 +0000 UTC m=+1116.785352390" observedRunningTime="2026-02-14 04:28:05.093725652 +0000 UTC m=+1117.174662966" watchObservedRunningTime="2026-02-14 04:28:05.095127399 +0000 UTC m=+1117.176064713"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.130162 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podStartSLOduration=4.6839956879999995 podStartE2EDuration="32.130138369s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.049017652 +0000 UTC m=+1089.129954966" lastFinishedPulling="2026-02-14 04:28:04.495160333 +0000 UTC m=+1116.576097647" observedRunningTime="2026-02-14 04:28:05.129310058 +0000 UTC m=+1117.210247372" watchObservedRunningTime="2026-02-14 04:28:05.130138369 +0000 UTC m=+1117.211075673"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.235917 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podStartSLOduration=4.461384635 podStartE2EDuration="32.23590205s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.475544883 +0000 UTC m=+1088.556482197" lastFinishedPulling="2026-02-14 04:28:04.250062298 +0000 UTC m=+1116.330999612" observedRunningTime="2026-02-14 04:28:05.187214334 +0000 UTC m=+1117.268151648" watchObservedRunningTime="2026-02-14 04:28:05.23590205 +0000 UTC m=+1117.316839354"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.236341 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podStartSLOduration=5.05117993 podStartE2EDuration="32.236336642s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.310114264 +0000 UTC m=+1089.391051578" lastFinishedPulling="2026-02-14 04:28:04.495270976 +0000 UTC m=+1116.576208290" observedRunningTime="2026-02-14 04:28:05.232373349 +0000 UTC m=+1117.313310663" watchObservedRunningTime="2026-02-14 04:28:05.236336642 +0000 UTC m=+1117.317273956"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.940907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" event={"ID":"ebee5651-7233-4c18-bb97-a4dc91eabef4","Type":"ContainerStarted","Data":"22c74f9f2f8244e121926f179c8afdca6427d769bf911f2aa6fbbf3221939845"}
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.942393 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" event={"ID":"3025ff58-4a91-43f5-8f15-94cadd0cef8b","Type":"ContainerStarted","Data":"a29228d01cbb6e1a2e7ef06b29313bb44d6874f7c517e0036cafd031ec6c4fc1"}
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.943146 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:28:05 crc kubenswrapper[4867]: I0214 04:28:05.965620 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podStartSLOduration=3.012739201 podStartE2EDuration="32.96560246s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.574520783 +0000 UTC m=+1087.655458097" lastFinishedPulling="2026-02-14 04:28:05.527384032 +0000 UTC m=+1117.608321356" observedRunningTime="2026-02-14 04:28:05.962221493 +0000 UTC m=+1118.043158807" watchObservedRunningTime="2026-02-14 04:28:05.96560246 +0000 UTC m=+1118.046539774"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.658695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.666135 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c83fa345-043f-453c-b797-a00db3111d44-webhook-certs\") pod \"openstack-operator-controller-manager-75585db5cc-kzk25\" (UID: \"c83fa345-043f-453c-b797-a00db3111d44\") " pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:06 crc kubenswrapper[4867]: I0214 04:28:06.843010 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.363622 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"]
Feb 14 04:28:07 crc kubenswrapper[4867]: W0214 04:28:07.379419 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83fa345_043f_453c_b797_a00db3111d44.slice/crio-53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6 WatchSource:0}: Error finding container 53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6: Status 404 returned error can't find the container with id 53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.965003 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" event={"ID":"c83fa345-043f-453c-b797-a00db3111d44","Type":"ContainerStarted","Data":"7a2ee5a9bcad944530f5c6de38ec65cfcb4cfe6b779359783d1bc2456001426a"}
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.965302 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" event={"ID":"c83fa345-043f-453c-b797-a00db3111d44","Type":"ContainerStarted","Data":"53850596ee7561b746178b85f2b864cdff0e9a820efb5c8fc5bc4f3017f563d6"}
Feb 14 04:28:07 crc kubenswrapper[4867]: I0214 04:28:07.966390 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:08 crc kubenswrapper[4867]: I0214 04:28:08.020095 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podStartSLOduration=34.020051927 podStartE2EDuration="34.020051927s" podCreationTimestamp="2026-02-14 04:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:28:08.008091806 +0000 UTC m=+1120.089029120" watchObservedRunningTime="2026-02-14 04:28:08.020051927 +0000 UTC m=+1120.100989241"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.510172 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.705290 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq"
Feb 14 04:28:13 crc kubenswrapper[4867]: I0214 04:28:13.942221 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.118250 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.541142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.646432 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.967454 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56"
Feb 14 04:28:14 crc kubenswrapper[4867]: I0214 04:28:14.987436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.048064 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" event={"ID":"185d4fd5-608b-48d8-8731-27e7a05adfe2","Type":"ContainerStarted","Data":"fed09a44d0d668968ebe9709f90b5aa759aebf8092c357413bb704036e8e59ec"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.049281 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.051711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" event={"ID":"ebee5651-7233-4c18-bb97-a4dc91eabef4","Type":"ContainerStarted","Data":"d3beb0e27719410f426cff5b15244494a4ee7c2cbce0eb2198cf7ca641696505"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.052007 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.055937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" event={"ID":"1f889f7b-8ae5-43e3-ab54-d3bf06c010df","Type":"ContainerStarted","Data":"8894d7e55a88068670ef6806a3ba8242e721063ca543ab0b1eb958d616bd6830"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.056237 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.063622 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" event={"ID":"7bb6de63-3c92-43de-a01b-b34df765aeba","Type":"ContainerStarted","Data":"d87e3587e2f3ef22cff7675f9ef30627896f6c5f50a0d16d8ccdc5839a94ae83"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.064355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.072742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" event={"ID":"66c8a0dd-f076-4994-bd42-39c80de83233","Type":"ContainerStarted","Data":"5037d0e368dde2b99b3c5a944e803df1b58c708c2c477ef6e42307397ba217ea"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.073651 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.091469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" event={"ID":"bc4bb4fd-bcc8-438b-af84-a2db3d3e346a","Type":"ContainerStarted","Data":"e3063da35fa7215aaed10a458603d6c94495363580003e6a0e6a48e4a1367801"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.091928 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.104851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" event={"ID":"38a9cdf3-42e2-4279-8092-af7e8c82bc51","Type":"ContainerStarted","Data":"14ca97f9db879083cb331bf07f6fc278f12ca99a6d001aa8f050bf341b95ecb0"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.105753 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.106092 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podStartSLOduration=4.090180098 podStartE2EDuration="42.106076927s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.358206189 +0000 UTC m=+1088.439143503" lastFinishedPulling="2026-02-14 04:28:14.374103018 +0000 UTC m=+1126.455040332" observedRunningTime="2026-02-14 04:28:15.106030536 +0000 UTC m=+1127.186967870" watchObservedRunningTime="2026-02-14 04:28:15.106076927 +0000 UTC m=+1127.187014241"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.119780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.120695 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.136124 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.136897 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.140653 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podStartSLOduration=3.342405857 podStartE2EDuration="42.140634626s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.575700554 +0000 UTC m=+1087.656637868" lastFinishedPulling="2026-02-14 04:28:14.373929323 +0000 UTC m=+1126.454866637" observedRunningTime="2026-02-14 04:28:15.131049226 +0000 UTC m=+1127.211986540" watchObservedRunningTime="2026-02-14 04:28:15.140634626 +0000 UTC m=+1127.221571940"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.145966 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" event={"ID":"652d3b74-0634-4f8f-b5ef-3adfc53920eb","Type":"ContainerStarted","Data":"b7b2b14eb03ea6bf5916f1c07b3ad2754d1387e5fed0b42455928cd802f75d69"}
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.146419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.169735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podStartSLOduration=4.030531109 podStartE2EDuration="42.169670791s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.47310912 +0000 UTC m=+1088.554046434" lastFinishedPulling="2026-02-14 04:28:14.612248802 +0000 UTC m=+1126.693186116" observedRunningTime="2026-02-14 04:28:15.16154867 +0000 UTC m=+1127.242485994" watchObservedRunningTime="2026-02-14 04:28:15.169670791 +0000 UTC m=+1127.250608105"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.193916 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podStartSLOduration=4.291797685 podStartE2EDuration="42.193900191s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.473066489 +0000 UTC m=+1088.554003803" lastFinishedPulling="2026-02-14 04:28:14.375168995 +0000 UTC m=+1126.456106309" observedRunningTime="2026-02-14 04:28:15.188815189 +0000 UTC m=+1127.269752503" watchObservedRunningTime="2026-02-14 04:28:15.193900191 +0000 UTC m=+1127.274837505"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.224790 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podStartSLOduration=33.015922539 podStartE2EDuration="42.224773034s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:28:05.066990727 +0000 UTC m=+1117.147928041" lastFinishedPulling="2026-02-14 04:28:14.275841222 +0000 UTC m=+1126.356778536" observedRunningTime="2026-02-14 04:28:15.218615454 +0000 UTC m=+1127.299552768" watchObservedRunningTime="2026-02-14 04:28:15.224773034 +0000 UTC m=+1127.305710348"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.245550 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podStartSLOduration=4.348013786 podStartE2EDuration="42.245528314s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.478101829 +0000 UTC m=+1088.559039143" lastFinishedPulling="2026-02-14 04:28:14.375616357 +0000 UTC m=+1126.456553671" observedRunningTime="2026-02-14 04:28:15.237241738 +0000 UTC m=+1127.318179042" watchObservedRunningTime="2026-02-14 04:28:15.245528314 +0000 UTC m=+1127.326465628"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.273382 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podStartSLOduration=2.6954447630000002 podStartE2EDuration="42.273360488s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:35.020860076 +0000 UTC m=+1087.101797390" lastFinishedPulling="2026-02-14 04:28:14.598775811 +0000 UTC m=+1126.679713115" observedRunningTime="2026-02-14 04:28:15.269305442 +0000 UTC m=+1127.350242756" watchObservedRunningTime="2026-02-14 04:28:15.273360488 +0000 UTC m=+1127.354297802"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.314446 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podStartSLOduration=4.971096393 podStartE2EDuration="42.314420976s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.933427073 +0000 UTC m=+1089.014364387" lastFinishedPulling="2026-02-14 04:28:14.276751646 +0000 UTC m=+1126.357688970" observedRunningTime="2026-02-14 04:28:15.309048826 +0000 UTC m=+1127.389986140" watchObservedRunningTime="2026-02-14 04:28:15.314420976 +0000 UTC m=+1127.395358290"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.349289 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podStartSLOduration=4.372369008 podStartE2EDuration="42.349268102s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:36.35862668 +0000 UTC m=+1088.439563994" lastFinishedPulling="2026-02-14 04:28:14.335525774 +0000 UTC m=+1126.416463088" observedRunningTime="2026-02-14 04:28:15.348251616 +0000 UTC m=+1127.429188930" watchObservedRunningTime="2026-02-14 04:28:15.349268102 +0000 UTC m=+1127.430205416"
Feb 14 04:28:15 crc kubenswrapper[4867]: I0214 04:28:15.392659 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podStartSLOduration=33.067462609 podStartE2EDuration="42.39263601s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:28:04.857480847 +0000 UTC m=+1116.938418161" lastFinishedPulling="2026-02-14 04:28:14.182654248 +0000 UTC m=+1126.263591562" observedRunningTime="2026-02-14 04:28:15.385743011 +0000 UTC m=+1127.466680345" watchObservedRunningTime="2026-02-14 04:28:15.39263601 +0000 UTC m=+1127.473573324"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.230019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" event={"ID":"82e5dbee-ab1e-498c-9460-be75226afa18","Type":"ContainerStarted","Data":"e4b9247c8e6be527ef2a9a0b9af8b49146d28bd377bed746f75902fbf11841a2"}
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.232210 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.252328 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podStartSLOduration=6.248007481 podStartE2EDuration="43.252303481s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.371715037 +0000 UTC m=+1089.452652351" lastFinishedPulling="2026-02-14 04:28:14.376011037 +0000 UTC m=+1126.456948351" observedRunningTime="2026-02-14 04:28:16.247122696 +0000 UTC m=+1128.328060020" watchObservedRunningTime="2026-02-14 04:28:16.252303481 +0000 UTC m=+1128.333240795"
Feb 14 04:28:16 crc kubenswrapper[4867]: I0214 04:28:16.849549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.237210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" event={"ID":"9ec66be5-3947-45d1-bf34-c7639e8d4c8a","Type":"ContainerStarted","Data":"0240b976b25ccf1c053a870ea138e0a0e957fc5c1bfb9682d6269c052b9ba2d5"}
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.237635 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.238803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" event={"ID":"74a43e5b-11c4-459d-bbc7-03aa03489f17","Type":"ContainerStarted","Data":"a01ba509f7a52344ad900a86ef39c3df54f080f47bbaad35cc8747cba870531b"}
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.239778 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.256192 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podStartSLOduration=5.101790526 podStartE2EDuration="44.256160672s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.049169116 +0000 UTC m=+1089.130106430" lastFinishedPulling="2026-02-14 04:28:16.203539262 +0000 UTC m=+1128.284476576" observedRunningTime="2026-02-14 04:28:17.250950966 +0000 UTC m=+1129.331888290" watchObservedRunningTime="2026-02-14 04:28:17.256160672 +0000 UTC m=+1129.337097996"
Feb 14 04:28:17 crc kubenswrapper[4867]: I0214 04:28:17.279011 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podStartSLOduration=5.086305589 podStartE2EDuration="44.278994716s" podCreationTimestamp="2026-02-14 04:27:33 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.008691679 +0000 UTC m=+1089.089628993" lastFinishedPulling="2026-02-14 04:28:16.201380806 +0000 UTC m=+1128.282318120" observedRunningTime="2026-02-14 04:28:17.272497267 +0000 UTC m=+1129.353434591" watchObservedRunningTime="2026-02-14 04:28:17.278994716 +0000 UTC m=+1129.359932030"
Feb 14 04:28:19 crc kubenswrapper[4867]: I0214 04:28:19.476231 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w"
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.268463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc"}
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.290766 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t"
Feb 14 04:28:20 crc kubenswrapper[4867]: I0214 04:28:20.297408 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" podStartSLOduration=4.080734356 podStartE2EDuration="46.297383444s" podCreationTimestamp="2026-02-14 04:27:34 +0000 UTC" firstStartedPulling="2026-02-14 04:27:37.255736918 +0000 UTC m=+1089.336674222" lastFinishedPulling="2026-02-14 04:28:19.472385946 +0000 UTC m=+1131.553323310" observedRunningTime="2026-02-14 04:28:20.286585784 +0000 UTC m=+1132.367523138" watchObservedRunningTime="2026-02-14 04:28:20.297383444 +0000 UTC m=+1132.378320758"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.492677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.531107 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.605919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn"
Feb 14 04:28:23 crc kubenswrapper[4867]: I0214 04:28:23.684157 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.087079 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.172251 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.211427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.395795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.714793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6"
Feb 14 04:28:24 crc kubenswrapper[4867]: I0214 04:28:24.942550 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz"
Feb 14 04:28:25 crc kubenswrapper[4867]: I0214 04:28:25.020183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj"
Feb 14 04:28:31 crc kubenswrapper[4867]: I0214 04:28:31.250937 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:28:31 crc kubenswrapper[4867]: I0214 04:28:31.251526 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.381323 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.386291 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.392933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393092 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393106 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-4thnd"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393161 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.393256 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.465847 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.467780 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.476439 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.477995 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.478316 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.487241 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"]
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579671 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579822 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579863 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579889 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.579917 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.581150 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.619165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"dnsmasq-dns-675f4bcbfc-s87hs\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.681610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.682005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.682050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.683244 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.683284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.705440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"dnsmasq-dns-78dd6ddcc-z692n\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " pod="openstack/dnsmasq-dns-78dd6ddcc-z692n"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.720914 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs"
Feb 14 04:28:43 crc kubenswrapper[4867]: I0214 04:28:43.831283 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.407743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"] Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.486101 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"] Feb 14 04:28:44 crc kubenswrapper[4867]: W0214 04:28:44.488827 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a87cc0d_e74a_4be2_9ac2_7f9d565f34e3.slice/crio-6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d WatchSource:0}: Error finding container 6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d: Status 404 returned error can't find the container with id 6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.522009 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" event={"ID":"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3","Type":"ContainerStarted","Data":"6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d"} Feb 14 04:28:44 crc kubenswrapper[4867]: I0214 04:28:44.523101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" event={"ID":"1e9ddba3-128d-4025-9661-b07c5e1e9329","Type":"ContainerStarted","Data":"37ef45b02f432cc3a119a40a28ecba25608ab0780e37295f9063ae35dc630718"} Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.705582 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"] Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.729920 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.735167 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.749353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.847050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.849632 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.849705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951261 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.951337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.952307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.952412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:45 crc kubenswrapper[4867]: I0214 04:28:45.995476 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8f4\" (UniqueName: 
\"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dnsmasq-dns-5ccc8479f9-lbzlt\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.075793 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.391361 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.443179 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.445183 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.459592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.580407 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.705905 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.706901 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.707346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.739422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"dnsmasq-dns-57d769cc4f-hxkz7\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.793542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.861597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.863861 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875040 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875439 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875615 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7gx8s" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875677 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875818 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875878 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.875824 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 04:28:46 crc kubenswrapper[4867]: I0214 04:28:46.892422 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.010709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011075 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011146 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011522 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011556 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.011730 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.038331 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:28:47 crc kubenswrapper[4867]: W0214 04:28:47.055767 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbe41be0_f7f8_47ff_a587_b85e282fa5ee.slice/crio-e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b WatchSource:0}: Error finding container e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b: Status 404 returned error can't find the container with id e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113961 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.113993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114040 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114166 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.114240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.115910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.117437 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.117887 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.118803 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.120805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc 
kubenswrapper[4867]: I0214 04:28:47.122455 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.122819 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.124249 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.124282 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c81ba883a06ca9e019b2d7c726ddbfb519b81827f5cfcee1e25c00752814b8f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.132069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.144656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.148740 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.194748 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.208719 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:28:47 crc kubenswrapper[4867]: I0214 04:28:47.360718 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:28:47 crc kubenswrapper[4867]: W0214 04:28:47.377941 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5f0e82b_f765_4fe1_b74e_856e1a6d8b8c.slice/crio-9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239 WatchSource:0}: Error finding container 9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239: Status 404 returned error can't find the container with id 9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239 Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.549875 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.552703 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558327 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558463 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558565 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558681 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558734 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.558795 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.563282 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-xwq4z" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.586745 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.597969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.600072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.620333 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.622044 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623495 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623638 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623675 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.623983 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: 
\"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.624006 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.624023 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.635973 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" event={"ID":"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c","Type":"ContainerStarted","Data":"9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239"} Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.637634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" event={"ID":"dbe41be0-f7f8-47ff-a587-b85e282fa5ee","Type":"ContainerStarted","Data":"e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b"} Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.661793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.678597 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726014 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726127 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726208 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726392 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726585 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726636 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726654 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726750 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 
crc kubenswrapper[4867]: I0214 04:28:47.726815 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726854 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.726965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727015 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727097 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.727150 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.728211 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.728687 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.729257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.730370 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.730607 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0" Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.733458 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.733518 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6ecbc127793ccdba0f55c49c319b455a0b3bdad6043979264d9c6d7f92205d3/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.734429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.735064 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.735480 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.738553 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.745235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.769407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.779111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: W0214 04:28:47.779462 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e022d9_e2db_41eb_bbc8_36a85211a141.slice/crio-eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e WatchSource:0}: Error finding container eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e: Status 404 returned error can't find the container with id eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836520 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836556 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836627 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836680 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836810 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836841 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836951 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836973 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.836991 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837035 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837060 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837080 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.837659 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.839845 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.841182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.842262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.842586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.844498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.844721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.845134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.845366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.852182 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.852219 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d7bcff7c5d0322515cfcd29e48bfb1d0d6f9021316ba38c2028cf5ce82afee/globalmount\"" pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.853719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.854120 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.854139 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55ff7cc17667ae9e120da2b34de2e1baed28e5c0bfceac7c1699349f36759e58/globalmount\"" pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.855497 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.860708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.862885 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.863755 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.865753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866160 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866182 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.866622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.867486 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.868825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.917202 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:47.978944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.004030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.054972 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.250284 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.655137 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e"}
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.975691 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.979942 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.983293 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xbw69"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988765 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.988860 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 14 04:28:48 crc kubenswrapper[4867]: I0214 04:28:48.991883 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.068418 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.068458 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103242 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103279 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103355 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103538 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.103561 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.190034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.208229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209426 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209466 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.209767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.210695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.213055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.213355 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27199a8-11ac-4e59-90b8-b42387dd6dd2-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.214418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27199a8-11ac-4e59-90b8-b42387dd6dd2-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.215193 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.215266 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/407d6ce299045fd326f604e987d7292806f389f36b0aa734b66f6d28c6aa64a2/globalmount\"" pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.219591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.242719 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27199a8-11ac-4e59-90b8-b42387dd6dd2-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.254664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24d97\" (UniqueName: \"kubernetes.io/projected/b27199a8-11ac-4e59-90b8-b42387dd6dd2-kube-api-access-24d97\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.376583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-04ed5daa-c5d1-498b-a709-6e4af0a0932b\") pod \"openstack-galera-0\" (UID: \"b27199a8-11ac-4e59-90b8-b42387dd6dd2\") " pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.392250 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.631902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.738884 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"6d2235a75be13119e9c9aa74a5f3a2e2f13d32b41febb3b537fd57f955f1f8bc"}
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.770820 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136"}
Feb 14 04:28:49 crc kubenswrapper[4867]: I0214 04:28:49.803260 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"1a22c1b816602c7a9c207095a5f963d6cce2df715e59142c62ec1b7539b424fc"}
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.512382 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.776399 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.778590 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783173 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783473 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.783775 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ffdkm"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.784454 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.788769 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.810804 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.817338 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.829101 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.829352 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-rs22h"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.832061 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.874125 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"6e3034a330e6e973a85a9955386cad48ddcbda0e3b4d2bda1bd1c14a5a4e9067"}
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.880187 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.906970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907091 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:50 crc kubenswrapper[4867]: I0214 04:28:50.907350 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008754 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008843 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.008992 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010198 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.010630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.015951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.016771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.017047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/505de461-9e6f-4914-bf50-e2bf4149b566-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.019957 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.019998 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ed8d1f3f2a89d962c8e70e2f9692b177bbfe1fa5bf896782a6497e50ff763e73/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.024986 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.043346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/505de461-9e6f-4914-bf50-e2bf4149b566-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.048238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sm7m\" (UniqueName: \"kubernetes.io/projected/505de461-9e6f-4914-bf50-e2bf4149b566-kube-api-access-7sm7m\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112729 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112787 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.112965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.114199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kolla-config\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.114341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-config-data\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.126698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.137083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-68542d8a-fd27-4c7f-94a6-39cc84f8a109\") pod \"openstack-cell1-galera-0\" (UID: \"505de461-9e6f-4914-bf50-e2bf4149b566\") " pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.141068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.142207 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk9sq\" (UniqueName: \"kubernetes.io/projected/f1d6dceb-5ee5-407d-ade4-be35d128d8dc-kube-api-access-vk9sq\") pod \"memcached-0\" (UID: \"f1d6dceb-5ee5-407d-ade4-be35d128d8dc\") " pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.167986 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 14 04:28:51 crc kubenswrapper[4867]: I0214 04:28:51.429584 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.789312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.974782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f1d6dceb-5ee5-407d-ade4-be35d128d8dc","Type":"ContainerStarted","Data":"a12f5cd207497e1be12c7bcbddd32c2c27c498b8a906cdeac8ccb904fa2f62ed"}
Feb 14 04:28:52 crc kubenswrapper[4867]: I0214 04:28:52.978388 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.822278 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.826081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.834310 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-tpx28"
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.837926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 14 04:28:53 crc kubenswrapper[4867]: I0214 04:28:53.930866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.033618 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.073698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"cb4b60d6fe1eb81c3db75f2723e52986af5ddbfde223fcc274cdbd671f8e5b99"}
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.081656 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"kube-state-metrics-0\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " pod="openstack/kube-state-metrics-0"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.176224 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.708706 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"]
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.710448 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.720996 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-wftkz"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.721243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.748577 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"]
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.766656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.766773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.869693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.869790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.905860 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/701367b7-aef6-43b5-a0f9-3a91206962de-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:54 crc kubenswrapper[4867]: I0214 04:28:54.933067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf2rw\" (UniqueName: \"kubernetes.io/projected/701367b7-aef6-43b5-a0f9-3a91206962de-kube-api-access-kf2rw\") pod \"observability-ui-dashboards-66cbf594b5-492b9\" (UID: \"701367b7-aef6-43b5-a0f9-3a91206962de\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.084501 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.308998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-796d588566-h9wcn"]
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.310996 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.386862 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796d588566-h9wcn"]
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392192 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392216 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392318 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.392395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.442114 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.445118 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477272 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477464 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477596 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dgxf9"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477706 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.477816 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478628 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478749 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.478897 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496786 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.496962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497227 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497604 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497694 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn"
Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.497912 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: 
\"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498000 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.498215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.502647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-service-ca\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.505429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-trusted-ca-bundle\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.505617 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-oauth-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.506039 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-config\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.565546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-oauth-config\") pod \"console-796d588566-h9wcn\" (UID: 
\"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.576633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.586155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/41d35864-bb64-45f3-bc1e-a7d5440c35ad-console-serving-cert\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.599929 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600050 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600174 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600255 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.600355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.601536 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.602331 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.603163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj7ct\" (UniqueName: \"kubernetes.io/projected/41d35864-bb64-45f3-bc1e-a7d5440c35ad-kube-api-access-mj7ct\") pod \"console-796d588566-h9wcn\" (UID: \"41d35864-bb64-45f3-bc1e-a7d5440c35ad\") " pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.603675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.637396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.638167 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " 
pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.650046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.653968 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.675405 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.685068 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.699634 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.700292 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:28:55 crc kubenswrapper[4867]: I0214 04:28:55.700328 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c69566d4c941ca8a51b196b92114beed9536eafb9e04e7c441265c9a20c9feb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.154312 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.275837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-492b9"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.401690 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.503619 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.519945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7lpqj"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.521815 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530105 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530440 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-475js" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.530671 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.552584 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.577803 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-dznst"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.584114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.600205 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dznst"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.638989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639032 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639072 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639110 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod 
\"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639153 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.639415 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743166 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.743922 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744151 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744224 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: 
\"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744609 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.744752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.752142 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.752569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-log-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.758134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16c28c0f-9310-4721-87cf-2d1bb88b5bba-var-run-ovn\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.784238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/16c28c0f-9310-4721-87cf-2d1bb88b5bba-scripts\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.791668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsr9m\" (UniqueName: \"kubernetes.io/projected/16c28c0f-9310-4721-87cf-2d1bb88b5bba-kube-api-access-hsr9m\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.807694 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-796d588566-h9wcn"] Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.814344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-ovn-controller-tls-certs\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.821106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c28c0f-9310-4721-87cf-2d1bb88b5bba-combined-ca-bundle\") pod \"ovn-controller-7lpqj\" (UID: \"16c28c0f-9310-4721-87cf-2d1bb88b5bba\") " pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.846668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.849595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850063 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850781 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.847463 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-etc-ovs\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.850994 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.851098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-run\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.851191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-log\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.852102 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/6f356df8-0955-46c4-9166-2c1eef982399-var-lib\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.855311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6f356df8-0955-46c4-9166-2c1eef982399-scripts\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.871322 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7d8w\" (UniqueName: \"kubernetes.io/projected/6f356df8-0955-46c4-9166-2c1eef982399-kube-api-access-p7d8w\") pod \"ovn-controller-ovs-dznst\" (UID: \"6f356df8-0955-46c4-9166-2c1eef982399\") " pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.880803 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj" Feb 14 04:28:56 crc kubenswrapper[4867]: I0214 04:28:56.929230 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.344369 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.346983 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352389 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352770 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.352965 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.353027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fqcln" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.353435 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.355141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.362145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" event={"ID":"701367b7-aef6-43b5-a0f9-3a91206962de","Type":"ContainerStarted","Data":"297be4f2c2d05398602a7a56cc65b22059095b88f73a6839ea02bb1fb7fdd68b"} Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473317 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473414 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.473439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474968 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.474991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.475027 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.475134 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577858 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.577875 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.579725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.580214 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.581264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.584889 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353b0cad-bb6a-4a68-b787-64fb7b32ee27-config\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.585169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586847 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586871 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.586979 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2791e6f04a407c8a08ed17014ba6b90fc1c1aed99508ca220d2fd83daa6b717c/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.594043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/353b0cad-bb6a-4a68-b787-64fb7b32ee27-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.596446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcrqc\" (UniqueName: \"kubernetes.io/projected/353b0cad-bb6a-4a68-b787-64fb7b32ee27-kube-api-access-wcrqc\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.665841 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-63ae6954-6a1d-48f9-b6a7-ee0e266f72bb\") pod \"ovsdbserver-nb-0\" (UID: \"353b0cad-bb6a-4a68-b787-64fb7b32ee27\") " pod="openstack/ovsdbserver-nb-0" Feb 14 04:28:57 crc kubenswrapper[4867]: I0214 04:28:57.690100 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.461085 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.466531 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470429 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470658 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.470755 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w4792" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.500960 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586323 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586418 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586907 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.586952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689369 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689539 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.689748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.690268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.690316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.691120 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.691210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc 
kubenswrapper[4867]: I0214 04:29:00.691237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.692389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-config\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.692645 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9faf0052-6200-4ac5-9216-7a26a29f4508-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.693363 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.693402 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0489361f098bfb09ef2865530d497df974905e0ea95999431299d200f73e3b92/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697484 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697534 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.697944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9faf0052-6200-4ac5-9216-7a26a29f4508-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.709479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qw78\" (UniqueName: \"kubernetes.io/projected/9faf0052-6200-4ac5-9216-7a26a29f4508-kube-api-access-4qw78\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:00 crc kubenswrapper[4867]: I0214 04:29:00.812937 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1340507-39b4-4147-a2fe-5c4d09e854ad\") pod \"ovsdbserver-sb-0\" (UID: \"9faf0052-6200-4ac5-9216-7a26a29f4508\") " pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.101152 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255347 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255764 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.255822 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.256769 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:29:01 crc kubenswrapper[4867]: I0214 04:29:01.256823 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c" gracePeriod=600 Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470711 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c" exitCode=0 Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470763 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c"} Feb 14 04:29:02 crc kubenswrapper[4867]: I0214 04:29:02.470804 4867 scope.go:117] "RemoveContainer" containerID="3ce87267e4cadbd1bac903bbe9da7eec07159552420bcd52dda15fc535f1ace5" Feb 14 04:29:07 crc kubenswrapper[4867]: W0214 04:29:07.577632 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda78fec22_f395_42fc_a228_8d896580bc95.slice/crio-7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d WatchSource:0}: Error finding container 7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d: Status 404 returned error can't find the container with id 7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d Feb 14 04:29:08 crc kubenswrapper[4867]: I0214 04:29:08.544333 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerStarted","Data":"7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d"} Feb 14 04:29:08 crc kubenswrapper[4867]: I0214 04:29:08.545540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796d588566-h9wcn" event={"ID":"41d35864-bb64-45f3-bc1e-a7d5440c35ad","Type":"ContainerStarted","Data":"35c6aea9ac553c5348e6649237f624ad7062d04a2f6e1250a646ada88b211005"} Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.438896 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.439703 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24d97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(b27199a8-11ac-4e59-90b8-b42387dd6dd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.440989 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" Feb 14 04:29:21 crc 
kubenswrapper[4867]: E0214 04:29:21.673359 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.893331 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.893501 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kf2rw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-66cbf594b5-492b9_openshift-operators(701367b7-aef6-43b5-a0f9-3a91206962de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 14 04:29:21 crc kubenswrapper[4867]: E0214 04:29:21.894985 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podUID="701367b7-aef6-43b5-a0f9-3a91206962de" Feb 14 04:29:22 crc kubenswrapper[4867]: E0214 04:29:22.685739 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:093d2731ac848ed5fd57356b155a19d3bf7b8db96d95b09c5d0095e143f7254f\\\"\"" 
pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podUID="701367b7-aef6-43b5-a0f9-3a91206962de" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.310856 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.311278 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-294tk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-1_openstack(6bc83863-74f4-4509-969c-0f3305a542a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.312475 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.321374 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.321478 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q676p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-2_openstack(9bba5174-edd6-4e59-8b84-6c50439be88e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.321804 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.322073 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7sm7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(505de461-9e6f-4914-bf50-e2bf4149b566): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.322916 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.324029 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.364824 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.365325 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6kp9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(647ba30a-5526-4e27-9095-680c31ff4eb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.366680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.381769 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.382410 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrf6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(e1e022d9-e2db-41eb-bbc8-36a85211a141): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.383746 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.709984 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" Feb 14 
04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710255 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710303 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710341 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" Feb 14 04:29:24 crc kubenswrapper[4867]: E0214 04:29:24.710378 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" Feb 14 04:29:24 crc kubenswrapper[4867]: I0214 04:29:24.835620 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj"] Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.494817 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.495285 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4khb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-z692n_openstack(6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.496937 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" podUID="6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.504780 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.504972 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-959cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-s87hs_openstack(1e9ddba3-128d-4025-9661-b07c5e1e9329): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.506279 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" podUID="1e9ddba3-128d-4025-9661-b07c5e1e9329" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.514560 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.514747 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xj27v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-hxkz7_openstack(a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.516036 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" Feb 14 04:29:25 crc kubenswrapper[4867]: W0214 04:29:25.516999 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c28c0f_9310_4721_87cf_2d1bb88b5bba.slice/crio-4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a WatchSource:0}: Error finding container 4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a: Status 404 returned error can't find the container with id 4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.523963 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.524194 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv 
--bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7j8f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lbzlt_openstack(dbe41be0-f7f8-47ff-a587-b85e282fa5ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.525680 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" Feb 14 04:29:25 crc kubenswrapper[4867]: I0214 04:29:25.739840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj" event={"ID":"16c28c0f-9310-4721-87cf-2d1bb88b5bba","Type":"ContainerStarted","Data":"4822560c390676a0713ab62f4d8795270c0dda5e0fda188c00b5e4cfe5130c2a"} Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.750097 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.750487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973270 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest 
list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973572 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.973709 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5zbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(a78fec22-f395-42fc-a228-8d896580bc95): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Feb 14 04:29:25 crc kubenswrapper[4867]: E0214 04:29:25.975091 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.372297 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.384675 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.486777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") pod \"1e9ddba3-128d-4025-9661-b07c5e1e9329\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") pod \"1e9ddba3-128d-4025-9661-b07c5e1e9329\" (UID: \"1e9ddba3-128d-4025-9661-b07c5e1e9329\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487365 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487407 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") pod \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\" (UID: \"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3\") " Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.487878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.488260 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config" (OuterVolumeSpecName: "config") pod "1e9ddba3-128d-4025-9661-b07c5e1e9329" (UID: "1e9ddba3-128d-4025-9661-b07c5e1e9329"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.488574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config" (OuterVolumeSpecName: "config") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.494392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7" (OuterVolumeSpecName: "kube-api-access-4khb7") pod "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" (UID: "6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3"). InnerVolumeSpecName "kube-api-access-4khb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.495412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx" (OuterVolumeSpecName: "kube-api-access-959cx") pod "1e9ddba3-128d-4025-9661-b07c5e1e9329" (UID: "1e9ddba3-128d-4025-9661-b07c5e1e9329"). InnerVolumeSpecName "kube-api-access-959cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.499146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590585 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590636 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4khb7\" (UniqueName: \"kubernetes.io/projected/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-kube-api-access-4khb7\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590652 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-959cx\" (UniqueName: \"kubernetes.io/projected/1e9ddba3-128d-4025-9661-b07c5e1e9329-kube-api-access-959cx\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590661 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.590674 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9ddba3-128d-4025-9661-b07c5e1e9329-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.646640 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 14 04:29:26 crc kubenswrapper[4867]: W0214 04:29:26.663077 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod353b0cad_bb6a_4a68_b787_64fb7b32ee27.slice/crio-9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58 WatchSource:0}: Error finding container 9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58: Status 404 returned error can't find the container with id 9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58 Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.758053 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-796d588566-h9wcn" event={"ID":"41d35864-bb64-45f3-bc1e-a7d5440c35ad","Type":"ContainerStarted","Data":"89de8ce8d39e362c2e7511282186708dca422f34e03beb9fef455082b5740e5a"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.761886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" event={"ID":"6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3","Type":"ContainerDied","Data":"6f6ab74e4cce4dfd48bee7b02d98ab6f158452369ea200d7d42632fd7db4659d"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.762051 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-z692n" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.767671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.769318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" event={"ID":"1e9ddba3-128d-4025-9661-b07c5e1e9329","Type":"ContainerDied","Data":"37ef45b02f432cc3a119a40a28ecba25608ab0780e37295f9063ae35dc630718"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.769463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-s87hs" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.781691 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"9bdefce622b59860174ddd872b95e01cd574127f0ad95423e2c0fcb3f2154c58"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.808240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.815744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"f1d6dceb-5ee5-407d-ade4-be35d128d8dc","Type":"ContainerStarted","Data":"cc7200bfcb007faa77f39190304eeb096c5f0018fd1bda42f79e7843d5cad132"} Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.815866 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 14 04:29:26 crc kubenswrapper[4867]: E0214 04:29:26.817141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.836305 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-796d588566-h9wcn" podStartSLOduration=31.836282117 podStartE2EDuration="31.836282117s" podCreationTimestamp="2026-02-14 04:28:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:26.785125726 +0000 UTC m=+1198.866063050" watchObservedRunningTime="2026-02-14 04:29:26.836282117 +0000 UTC m=+1198.917219431" Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.851935 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.862908 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-s87hs"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.903757 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.911133 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-z692n"] Feb 14 04:29:26 crc kubenswrapper[4867]: I0214 04:29:26.936670 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.129870626 podStartE2EDuration="36.936643537s" podCreationTimestamp="2026-02-14 04:28:50 +0000 UTC" firstStartedPulling="2026-02-14 04:28:52.88283099 +0000 UTC m=+1164.963768304" lastFinishedPulling="2026-02-14 04:29:25.689603901 +0000 UTC m=+1197.770541215" observedRunningTime="2026-02-14 04:29:26.921718449 +0000 UTC m=+1199.002655763" watchObservedRunningTime="2026-02-14 04:29:26.936643537 +0000 UTC m=+1199.017580851" Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.013915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e9ddba3-128d-4025-9661-b07c5e1e9329" path="/var/lib/kubelet/pods/1e9ddba3-128d-4025-9661-b07c5e1e9329/volumes" Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.014336 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3" path="/var/lib/kubelet/pods/6a87cc0d-e74a-4be2-9ac2-7f9d565f34e3/volumes" Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.275820 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.714447 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-dznst"] Feb 14 04:29:27 crc kubenswrapper[4867]: I0214 04:29:27.826561 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"7c55bc7f1bc686894b3f509f6c90a39778393c04ab65d3e679be60bc6d5ef550"} Feb 14 04:29:29 crc kubenswrapper[4867]: W0214 04:29:29.403221 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f356df8_0955_46c4_9166_2c1eef982399.slice/crio-0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71 WatchSource:0}: Error finding container 0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71: Status 404 returned error can't find the container with id 0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71 Feb 14 04:29:29 crc kubenswrapper[4867]: I0214 04:29:29.852014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"0805180949cae57ae21cd331fb9b565e19d17c17b10cb5bf5debc23283a6cf71"} Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.169962 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.880533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj" event={"ID":"16c28c0f-9310-4721-87cf-2d1bb88b5bba","Type":"ContainerStarted","Data":"024c92a0d3dd82c2ce5e4b6e61d011efe3cd5c6541f2cd352e0ab0c7a014be5b"} Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.881004 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7lpqj" Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.882705 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"5b651ce16dda3789a2359fd3ea8f6a35daeb189d608b4f490fddfa341b8ae70d"} Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.886915 4867 generic.go:334] "Generic (PLEG): container finished" podID="6f356df8-0955-46c4-9166-2c1eef982399" containerID="51d0f239c29026a75fd0385ee45a13f98a3f630daa99fbd65c626a238e95f520" exitCode=0 Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.887135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerDied","Data":"51d0f239c29026a75fd0385ee45a13f98a3f630daa99fbd65c626a238e95f520"} Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.893431 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"9c75821820c4bca9c1e46189f48b8c3613810190444cd4b1ab130fbd23a5988b"} Feb 14 04:29:31 crc kubenswrapper[4867]: I0214 04:29:31.910950 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7lpqj" podStartSLOduration=30.786869513 podStartE2EDuration="35.91091897s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:25.51995734 +0000 UTC m=+1197.600894654" lastFinishedPulling="2026-02-14 04:29:30.644006797 +0000 UTC m=+1202.724944111" observedRunningTime="2026-02-14 04:29:31.898912377 +0000 UTC m=+1203.979849701" watchObservedRunningTime="2026-02-14 04:29:31.91091897 +0000 UTC m=+1203.991856284" Feb 14 04:29:32 crc kubenswrapper[4867]: I0214 04:29:32.904194 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"42e8d69fc5fa2650c4797b1653adebe3254082cae9e55c7ebdc44341f489e759"} Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.914918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9faf0052-6200-4ac5-9216-7a26a29f4508","Type":"ContainerStarted","Data":"cc1860ae628ffb43275f44883ae0b3aefc69b7d7264ec66a15337b6960dc2076"} Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.918679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-dznst" event={"ID":"6f356df8-0955-46c4-9166-2c1eef982399","Type":"ContainerStarted","Data":"aa174664422328dad834c3062854d4b324ab232201f0b150013b14519d1c38f7"} Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.919848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.919892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.922851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"353b0cad-bb6a-4a68-b787-64fb7b32ee27","Type":"ContainerStarted","Data":"967e4a50c913d0ec337bb8ac1a062e340511c61d9594b56fe9c5a8e8fc49544f"} Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.942905 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=28.775696911 podStartE2EDuration="34.942887231s" podCreationTimestamp="2026-02-14 04:28:59 +0000 UTC" firstStartedPulling="2026-02-14 04:29:27.308453888 
+0000 UTC m=+1199.389391202" lastFinishedPulling="2026-02-14 04:29:33.475644208 +0000 UTC m=+1205.556581522" observedRunningTime="2026-02-14 04:29:33.936125375 +0000 UTC m=+1206.017062719" watchObservedRunningTime="2026-02-14 04:29:33.942887231 +0000 UTC m=+1206.023824545" Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.963195 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-dznst" podStartSLOduration=36.587511448 podStartE2EDuration="37.963163609s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:29.407386362 +0000 UTC m=+1201.488323676" lastFinishedPulling="2026-02-14 04:29:30.783038523 +0000 UTC m=+1202.863975837" observedRunningTime="2026-02-14 04:29:33.954135704 +0000 UTC m=+1206.035073028" watchObservedRunningTime="2026-02-14 04:29:33.963163609 +0000 UTC m=+1206.044100923" Feb 14 04:29:33 crc kubenswrapper[4867]: I0214 04:29:33.976735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=31.099212155 podStartE2EDuration="37.976708071s" podCreationTimestamp="2026-02-14 04:28:56 +0000 UTC" firstStartedPulling="2026-02-14 04:29:26.665988107 +0000 UTC m=+1198.746925421" lastFinishedPulling="2026-02-14 04:29:33.543484023 +0000 UTC m=+1205.624421337" observedRunningTime="2026-02-14 04:29:33.970763596 +0000 UTC m=+1206.051700920" watchObservedRunningTime="2026-02-14 04:29:33.976708071 +0000 UTC m=+1206.057645385" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.102014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.164976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.184881 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.253997 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.255927 4867 util.go:30] "No sandbox for pod can be found. 
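The pod_startup_latency_tracker.go:104 entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short sketch reproducing the arithmetic with the ovsdbserver-nb-0 values; this is an illustration, not kubelet code:

```go
// Recompute the ovsdbserver-nb-0 startup figures reported above:
// podStartE2EDuration = 37.976708071s, podStartSLOduration = 31.099212155s.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s) // Parse also accepts the fractional seconds
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-02-14 04:28:56 +0000 UTC")
	firstPull := mustParse("2026-02-14 04:29:26.665988107 +0000 UTC")
	lastPull := mustParse("2026-02-14 04:29:33.543484023 +0000 UTC")
	observed := mustParse("2026-02-14 04:29:33.976708071 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time
	fmt.Println(e2e, slo)
}
```

The same relation holds for the memcached-0 entry earlier in this capture: 36.936643537s end to end, 4.129870626s once the roughly 32.8s pull window is excluded.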
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.295997 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.379859 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.482891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.483162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.483302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.484448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.485057 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.528395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zs8x\" (UniqueName: 
\"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"dnsmasq-dns-7cb5889db5-cl29c\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.604829 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.765107 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.891774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.892298 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.892374 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") pod \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\" (UID: \"dbe41be0-f7f8-47ff-a587-b85e282fa5ee\") " Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.893722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.894470 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config" (OuterVolumeSpecName: "config") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.902735 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4" (OuterVolumeSpecName: "kube-api-access-7j8f4") pod "dbe41be0-f7f8-47ff-a587-b85e282fa5ee" (UID: "dbe41be0-f7f8-47ff-a587-b85e282fa5ee"). InnerVolumeSpecName "kube-api-access-7j8f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:34 crc kubenswrapper[4867]: I0214 04:29:34.972126 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4"} Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004414 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004442 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j8f4\" (UniqueName: \"kubernetes.io/projected/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-kube-api-access-7j8f4\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.004452 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dbe41be0-f7f8-47ff-a587-b85e282fa5ee-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.017659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722"} Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" event={"ID":"dbe41be0-f7f8-47ff-a587-b85e282fa5ee","Type":"ContainerDied","Data":"e2cde2f1b32b51340fd763b4ceea3517b72f2d3aa4a0cc5b4b3855a816cd999b"} Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024249 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lbzlt" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.024442 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.182702 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.206587 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lbzlt"] Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.338783 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:35 crc kubenswrapper[4867]: W0214 04:29:35.339013 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa85f647_f104_47eb_800c_5926241431c6.slice/crio-b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163 WatchSource:0}: Error finding container b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163: Status 404 returned error can't find the container with id b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163 Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.584368 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.590416 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592119 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-nmjhj" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592119 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.592457 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.594081 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.614420 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.686151 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.686208 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.691991 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.723858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724178 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724319 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.724370 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827383 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.827495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.828394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-lock\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.828467 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.828896 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:35 crc kubenswrapper[4867]: E0214 04:29:35.829040 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:36.32901716 +0000 UTC m=+1208.409954474 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.829060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/1d9f9909-1442-4d83-b2aa-0f58d4022338-cache\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.834521 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.834642 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/267768274c449aec6b5b6bd87651d01565bcb26558a88e152a72bbebcd71e6ea/globalmount\"" pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.837613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9f9909-1442-4d83-b2aa-0f58d4022338-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.848858 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v8rn\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-kube-api-access-4v8rn\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:35 crc kubenswrapper[4867]: I0214 04:29:35.886626 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-319b99f7-9436-4c11-9b1c-dc8e7768f04e\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.059630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerStarted","Data":"b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163"} Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.077201 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796d588566-h9wcn" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.171269 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.292236 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-27bx5"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.293729 4867 util.go:30] "No sandbox for pod can be found. 
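The csi_attacher.go:380 line above shows the kubelet skipping the MountDevice staging step because kubevirt.io.hostpath-provisioner does not advertise STAGE_UNSTAGE_VOLUME. A sketch of that capability probe using the CSI spec's Go bindings; the socket path is an assumption for illustration, not taken from this log:

```go
// Query a CSI node plugin for STAGE_UNSTAGE_VOLUME; when absent, staging
// (MountDevice) is skipped, as in the csi_attacher.go:380 line above.
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Illustrative socket path; real plugins register under /var/lib/kubelet/plugins/.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/example-csi/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(
		context.Background(), &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		panic(err)
	}
	stage := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			stage = true
		}
	}
	fmt.Println("STAGE_UNSTAGE_VOLUME advertised:", stage) // false => skip MountDevice
}
```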
Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296331 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.296764 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.315736 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.333780 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.353967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354016 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354077 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354120 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod 
\"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.354173 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354378 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354394 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.354433 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:37.354417805 +0000 UTC m=+1209.435355119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.380687 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.382757 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.387649 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"] Feb 14 04:29:36 crc kubenswrapper[4867]: E0214 04:29:36.388495 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-tqpbj ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-27bx5" podUID="2eb35c23-c6de-46f0-a7bf-8390d9eefd42" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.402294 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.463866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.463958 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464049 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464078 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464170 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " 
pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464267 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464338 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464409 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.464951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.467641 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:36 crc 
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.514249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.518753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.519792 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.520936 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.522530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"swift-ring-rebalance-27bx5\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " pod="openstack/swift-ring-rebalance-27bx5"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.556012 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"]
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568089 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568299 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568328 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568471 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.568554 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.569767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.570867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.572093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.576927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.577307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.589014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm"
\"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"swift-ring-rebalance-dc8sm\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.611664 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.614053 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.617882 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.640815 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671628 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.671864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.696088 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.724071 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.735696 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.737447 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.741562 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.748731 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"] Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.774765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.776883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.777026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.777225 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.778794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " 
pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.779098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.779207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.780774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.781045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.781340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.806605 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"dnsmasq-dns-6c89d5d749-b7rzr\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.840860 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886241 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.886467 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.887098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovs-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.887181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43e8f5ec-ba3d-4962-97f1-2be3a087852e-ovn-rundir\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.889674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43e8f5ec-ba3d-4962-97f1-2be3a087852e-config\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.898688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-combined-ca-bundle\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.900785 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43e8f5ec-ba3d-4962-97f1-2be3a087852e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.926234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwkl\" (UniqueName: \"kubernetes.io/projected/43e8f5ec-ba3d-4962-97f1-2be3a087852e-kube-api-access-5pwkl\") pod 
\"ovn-controller-metrics-4gz6p\" (UID: \"43e8f5ec-ba3d-4962-97f1-2be3a087852e\") " pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:36 crc kubenswrapper[4867]: I0214 04:29:36.992122 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.012702 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbe41be0-f7f8-47ff-a587-b85e282fa5ee" path="/var/lib/kubelet/pods/dbe41be0-f7f8-47ff-a587-b85e282fa5ee/volumes" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.096165 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.109821 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa85f647-f104-47eb-800c-5926241431c6" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" exitCode=0 Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.109895 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475"} Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.113413 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd"} Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.114384 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.116261 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.126833 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.132081 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.136794 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.139799 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.141232 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.150390 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-4gz6p" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.256441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.301593 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.306889 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.311698 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts" (OuterVolumeSpecName: "scripts") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325579 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325664 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325695 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325785 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325815 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.325871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") pod \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\" (UID: \"2eb35c23-c6de-46f0-a7bf-8390d9eefd42\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326841 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.326874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327063 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.327170 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.344694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj" (OuterVolumeSpecName: "kube-api-access-tqpbj") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "kube-api-access-tqpbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.345095 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.363926 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.364818 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.374472 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.482962 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483329 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483660 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.483826 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.483922 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:37 crc kubenswrapper[4867]: E0214 04:29:37.484091 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:39.484061178 +0000 UTC m=+1211.564998492 (durationBeforeRetry 2s). 
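
The "No retries permitted until ..." entry above comes from the kubelet's per-volume retry backoff: the first MountVolume.SetUp failure for "etc-swift" schedules a retry 2s out, and the same failure later in this log is rescheduled at 4s and then 8s, so the delay doubles on each attempt until the missing "swift-ring-files" ConfigMap appears (presumably published by the swift-ring-rebalance job seen in nearby entries). Below is a minimal Python sketch of that doubling-with-cap policy; the function and parameter names are illustrative, not the kubelet's actual code:

```python
import time

def mount_with_backoff(mount_fn, initial_delay=2.0, max_delay=16.0, attempts=5):
    """Retry a failing mount, doubling the wait after each failure,
    matching the 2s -> 4s -> 8s 'durationBeforeRetry' progression in this log."""
    delay = initial_delay
    for _ in range(attempts):
        try:
            return mount_fn()
        except RuntimeError as err:
            print(f"mount failed ({err}); no retries permitted for {delay:.0f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff with a cap
    raise TimeoutError("volume never became mountable")
```

Once the ConfigMap exists, the next scheduled retry succeeds and the blocked pod (swift-storage-0 here) can proceed.
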
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.483868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.484640 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485486 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485540 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485554 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485568 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqpbj\" (UniqueName: \"kubernetes.io/projected/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-kube-api-access-tqpbj\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.485581 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.486287 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.512361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2eb35c23-c6de-46f0-a7bf-8390d9eefd42" (UID: "2eb35c23-c6de-46f0-a7bf-8390d9eefd42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.554441 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"dnsmasq-dns-698758b865-cp76f\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.573107 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dc8sm"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608225 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.608549 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") pod \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\" (UID: \"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c\") " Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.609040 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb35c23-c6de-46f0-a7bf-8390d9eefd42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.609991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.610266 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config" (OuterVolumeSpecName: "config") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "config". 
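
The paired reconciler_common entries in this stretch trace the kubelet volume manager's reconcile loop: volumes present in the desired state but not yet in the actual state get VerifyControllerAttachedVolume and MountVolume.SetUp operations (the new dnsmasq-dns-698758b865-cp76f pod), while volumes still mounted for a deleted pod (evidently the completed swift ring-rebalance pod, UID 2eb35c23-...) get UnmountVolume.TearDown followed by a "Volume detached" record. A toy sketch of that desired-vs-actual diff, using hypothetical names rather than the real kubelet types:

```python
def reconcile(desired: dict[str, set[str]], actual: dict[str, set[str]]):
    """Diff desired vs. actual {pod_uid: volume_names} and emit the two
    kinds of operations visible in this log."""
    ops = []
    for pod, vols in desired.items():
        for vol in sorted(vols - actual.get(pod, set())):
            ops.append(("MountVolume.SetUp", pod, vol))       # new pod's volumes
    for pod, vols in actual.items():
        for vol in sorted(vols - desired.get(pod, set())):
            ops.append(("UnmountVolume.TearDown", pod, vol))  # deleted pod's volumes
    return ops

# e.g. a replaced dnsmasq pod: the old pod's volumes unmount, the new pod's mount
print(reconcile(desired={"af541ba1": {"config", "dns-svc"}},
                actual={"a5f0e82b": {"config", "dns-svc"}}))
```
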
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.618753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v" (OuterVolumeSpecName: "kube-api-access-xj27v") pod "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" (UID: "a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c"). InnerVolumeSpecName "kube-api-access-xj27v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716780 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj27v\" (UniqueName: \"kubernetes.io/projected/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-kube-api-access-xj27v\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716815 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.716825 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.765173 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.777085 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.779288 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jjsz4" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792783 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.792960 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.794011 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.813181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.924880 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.924935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925129 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925167 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:37 crc kubenswrapper[4867]: I0214 04:29:37.925195 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028568 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028884 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028954 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.028995 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.030447 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.031077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-scripts\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.035435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.035721 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.036848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0552eb77-2bc5-49dd-911e-f08071a83da9-config\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.043672 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0552eb77-2bc5-49dd-911e-f08071a83da9-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.080046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh6pq\" (UniqueName: \"kubernetes.io/projected/0552eb77-2bc5-49dd-911e-f08071a83da9-kube-api-access-kh6pq\") pod \"ovn-northd-0\" (UID: \"0552eb77-2bc5-49dd-911e-f08071a83da9\") " pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.119848 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.146907 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-4gz6p"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.158685 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerStarted","Data":"2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d"} Feb 14 04:29:38 crc kubenswrapper[4867]: W0214 04:29:38.160821 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43e8f5ec_ba3d_4962_97f1_2be3a087852e.slice/crio-d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b WatchSource:0}: Error finding container d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b: Status 404 returned error can't find the container with id d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.166011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" event={"ID":"701367b7-aef6-43b5-a0f9-3a91206962de","Type":"ContainerStarted","Data":"781c47958fe4be489d80deefc216efc94eebd58f1a594f810fd549eb698505ed"} Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" event={"ID":"a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c","Type":"ContainerDied","Data":"9892bc720311d5c087d97016222dedfbfd5d79d98d86d65c02c43134fdd42239"} Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170275 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hxkz7" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.170316 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-27bx5" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.208475 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.255411 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-492b9" podStartSLOduration=3.369719905 podStartE2EDuration="44.25538503s" podCreationTimestamp="2026-02-14 04:28:54 +0000 UTC" firstStartedPulling="2026-02-14 04:28:56.259984291 +0000 UTC m=+1168.340921605" lastFinishedPulling="2026-02-14 04:29:37.145649416 +0000 UTC m=+1209.226586730" observedRunningTime="2026-02-14 04:29:38.203153352 +0000 UTC m=+1210.284090666" watchObservedRunningTime="2026-02-14 04:29:38.25538503 +0000 UTC m=+1210.336322344" Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.283523 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.293259 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hxkz7"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.319707 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.332624 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-27bx5"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.578953 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:29:38 crc kubenswrapper[4867]: I0214 04:29:38.893683 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.019744 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb35c23-c6de-46f0-a7bf-8390d9eefd42" path="/var/lib/kubelet/pods/2eb35c23-c6de-46f0-a7bf-8390d9eefd42/volumes" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.020932 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c" path="/var/lib/kubelet/pods/a5f0e82b-f765-4fe1-b74e-856e1a6d8b8c/volumes" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.199167 4867 generic.go:334] "Generic (PLEG): container finished" podID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerID="b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a" exitCode=0 Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.199983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.200203 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerStarted","Data":"a0ffea8d48e001e089ae4bf9bd0aae709da26664483eee7930b16346800bdb97"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.211721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerStarted","Data":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.211944 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" 
podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" containerID="cri-o://f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" gracePeriod=10 Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.212282 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.216671 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"7d5658b951af8fdef68cbab2977b1cf3210f036612287fad2460830c62bef625"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.223791 4867 generic.go:334] "Generic (PLEG): container finished" podID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" exitCode=0 Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.224398 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.224463 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerStarted","Data":"41aaccd20d5bf4daeae755d0c155b427f29d56138b6d3562c58792965bd5ee9b"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.228422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gz6p" event={"ID":"43e8f5ec-ba3d-4962-97f1-2be3a087852e","Type":"ContainerStarted","Data":"502b1bca37b9d2434aff0aaa6973356854bda054b38f0ecd6832adcbe53c59f9"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.228472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-4gz6p" event={"ID":"43e8f5ec-ba3d-4962-97f1-2be3a087852e","Type":"ContainerStarted","Data":"d30a2736478f9fa24942a5f0daa69bfafa12068836ee45d12cbe6581c5ac334b"} Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.245402 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" podStartSLOduration=4.820032128 podStartE2EDuration="5.245381001s" podCreationTimestamp="2026-02-14 04:29:34 +0000 UTC" firstStartedPulling="2026-02-14 04:29:35.341412998 +0000 UTC m=+1207.422350312" lastFinishedPulling="2026-02-14 04:29:35.766761871 +0000 UTC m=+1207.847699185" observedRunningTime="2026-02-14 04:29:39.238190704 +0000 UTC m=+1211.319128018" watchObservedRunningTime="2026-02-14 04:29:39.245381001 +0000 UTC m=+1211.326318315" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.307593 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-4gz6p" podStartSLOduration=3.307382613 podStartE2EDuration="3.307382613s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:39.287163257 +0000 UTC m=+1211.368100601" watchObservedRunningTime="2026-02-14 04:29:39.307382613 +0000 UTC m=+1211.388319927" Feb 14 04:29:39 crc kubenswrapper[4867]: I0214 04:29:39.493154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493409 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493793 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:39 crc kubenswrapper[4867]: E0214 04:29:39.493865 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:43.493843193 +0000 UTC m=+1215.574780507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.094453 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.215644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") pod \"fa85f647-f104-47eb-800c-5926241431c6\" (UID: \"fa85f647-f104-47eb-800c-5926241431c6\") " Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.225190 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x" (OuterVolumeSpecName: "kube-api-access-8zs8x") pod "fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "kube-api-access-8zs8x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.243706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerStarted","Data":"d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.243889 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246037 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa85f647-f104-47eb-800c-5926241431c6" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" exitCode=0 Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" event={"ID":"fa85f647-f104-47eb-800c-5926241431c6","Type":"ContainerDied","Data":"b689e14869d0b7bebda2bfe1f81a3f0324cf2d9cbabff503414d1c60e7a92163"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246190 4867 scope.go:117] "RemoveContainer" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.246256 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-cl29c" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.248232 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerStarted","Data":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.248934 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.251204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.253687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230"} Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.275451 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" podStartSLOduration=4.275429842 podStartE2EDuration="4.275429842s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:40.266933511 +0000 UTC m=+1212.347870825" watchObservedRunningTime="2026-02-14 04:29:40.275429842 +0000 UTC m=+1212.356367146" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.303190 4867 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.314862 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-cp76f" podStartSLOduration=3.314841297 podStartE2EDuration="3.314841297s" podCreationTimestamp="2026-02-14 04:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:40.308574004 +0000 UTC m=+1212.389511328" watchObservedRunningTime="2026-02-14 04:29:40.314841297 +0000 UTC m=+1212.395778611" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318120 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config" (OuterVolumeSpecName: "config") pod "fa85f647-f104-47eb-800c-5926241431c6" (UID: "fa85f647-f104-47eb-800c-5926241431c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318150 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.318185 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zs8x\" (UniqueName: \"kubernetes.io/projected/fa85f647-f104-47eb-800c-5926241431c6-kube-api-access-8zs8x\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.419845 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa85f647-f104-47eb-800c-5926241431c6-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.497491 4867 scope.go:117] "RemoveContainer" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.608777 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:40 crc kubenswrapper[4867]: I0214 04:29:40.624601 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-cl29c"] Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.017853 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa85f647-f104-47eb-800c-5926241431c6" path="/var/lib/kubelet/pods/fa85f647-f104-47eb-800c-5926241431c6/volumes" Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.265700 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08"} Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.268764 4867 generic.go:334] "Generic (PLEG): container finished" podID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerID="88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722" exitCode=0 Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.268852 4867 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerDied","Data":"88cb930154e07e378cec2e1f6e9deef9c47de4c5b43c2284262de9eb71194722"} Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.272334 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4" exitCode=0 Feb 14 04:29:41 crc kubenswrapper[4867]: I0214 04:29:41.273265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4"} Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.285293 4867 generic.go:334] "Generic (PLEG): container finished" podID="505de461-9e6f-4914-bf50-e2bf4149b566" containerID="6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd" exitCode=0 Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.285353 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerDied","Data":"6112e5b28cbdeaa3d1c11987b58af4ae7e622169b457b89afe74d3879df320fd"} Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.745107 4867 scope.go:117] "RemoveContainer" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:42 crc kubenswrapper[4867]: E0214 04:29:42.747771 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": container with ID starting with f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c not found: ID does not exist" containerID="f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.747819 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c"} err="failed to get container status \"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": rpc error: code = NotFound desc = could not find container \"f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c\": container with ID starting with f602853b9c9099bd5cca86b27c567097f0af7a70be9d8b6daffa58b6753bb07c not found: ID does not exist" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.747848 4867 scope.go:117] "RemoveContainer" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:42 crc kubenswrapper[4867]: E0214 04:29:42.748354 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": container with ID starting with 7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475 not found: ID does not exist" containerID="7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475" Feb 14 04:29:42 crc kubenswrapper[4867]: I0214 04:29:42.748378 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475"} err="failed to get container status \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": rpc error: 
code = NotFound desc = could not find container \"7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475\": container with ID starting with 7059749a0f090f4fcadd34570c504de064398543b7a31431508b3c8aff49c475 not found: ID does not exist" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.297035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.302176 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.305482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerStarted","Data":"fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.307718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"6be4d4eb29aec6a4a6bed660df9a7013dba5f0240aa9354739d1d64a318f086d"} Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.319920 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371982.534874 podStartE2EDuration="54.31990206s" podCreationTimestamp="2026-02-14 04:28:49 +0000 UTC" firstStartedPulling="2026-02-14 04:28:53.078831149 +0000 UTC m=+1165.159768463" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:43.319288414 +0000 UTC m=+1215.400225728" watchObservedRunningTime="2026-02-14 04:29:43.31990206 +0000 UTC m=+1215.400839374" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.353701 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=13.175407872 podStartE2EDuration="56.353659108s" podCreationTimestamp="2026-02-14 04:28:47 +0000 UTC" firstStartedPulling="2026-02-14 04:28:50.57566008 +0000 UTC m=+1162.656597394" lastFinishedPulling="2026-02-14 04:29:33.753911306 +0000 UTC m=+1205.834848630" observedRunningTime="2026-02-14 04:29:43.344631103 +0000 UTC m=+1215.425568417" watchObservedRunningTime="2026-02-14 04:29:43.353659108 +0000 UTC m=+1215.434596432" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.368078 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dc8sm" podStartSLOduration=2.17615468 podStartE2EDuration="7.368060983s" podCreationTimestamp="2026-02-14 04:29:36 +0000 UTC" firstStartedPulling="2026-02-14 04:29:37.606525503 +0000 UTC m=+1209.687462817" lastFinishedPulling="2026-02-14 04:29:42.798431806 +0000 UTC m=+1214.879369120" observedRunningTime="2026-02-14 04:29:43.364706515 +0000 UTC m=+1215.445643829" watchObservedRunningTime="2026-02-14 04:29:43.368060983 +0000 UTC m=+1215.448998297" Feb 14 04:29:43 crc kubenswrapper[4867]: I0214 04:29:43.500185 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500472 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500538 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:43 crc kubenswrapper[4867]: E0214 04:29:43.500616 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:29:51.50059229 +0000 UTC m=+1223.581529604 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:46 crc kubenswrapper[4867]: I0214 04:29:46.350328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d"} Feb 14 04:29:46 crc kubenswrapper[4867]: E0214 04:29:46.774326 4867 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:39416->38.102.83.113:33373: write tcp 38.102.83.113:39416->38.102.83.113:33373: write: connection reset by peer Feb 14 04:29:46 crc kubenswrapper[4867]: I0214 04:29:46.994721 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.768765 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.830002 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:47 crc kubenswrapper[4867]: I0214 04:29:47.833856 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" containerID="cri-o://d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" gracePeriod=10 Feb 14 04:29:48 crc kubenswrapper[4867]: I0214 04:29:48.381120 4867 generic.go:334] "Generic (PLEG): container finished" podID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerID="d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" exitCode=0 Feb 14 04:29:48 crc kubenswrapper[4867]: I0214 04:29:48.381179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab"} Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.632272 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.632671 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-galera-0" Feb 14 04:29:49 crc kubenswrapper[4867]: I0214 04:29:49.982008 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.514531 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.896490 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.983870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984452 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.984580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") pod \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\" (UID: \"deae29d8-abfa-4fe4-8314-b02cf70eb5be\") " Feb 14 04:29:50 crc kubenswrapper[4867]: I0214 04:29:50.988669 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f" (OuterVolumeSpecName: "kube-api-access-p426f") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "kube-api-access-p426f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.044306 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config" (OuterVolumeSpecName: "config") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.051153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.060426 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deae29d8-abfa-4fe4-8314-b02cf70eb5be" (UID: "deae29d8-abfa-4fe4-8314-b02cf70eb5be"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092365 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092406 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092421 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deae29d8-abfa-4fe4-8314-b02cf70eb5be-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.092435 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p426f\" (UniqueName: \"kubernetes.io/projected/deae29d8-abfa-4fe4-8314-b02cf70eb5be-kube-api-access-p426f\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.420390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" event={"ID":"deae29d8-abfa-4fe4-8314-b02cf70eb5be","Type":"ContainerDied","Data":"a0ffea8d48e001e089ae4bf9bd0aae709da26664483eee7930b16346800bdb97"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422225 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c89d5d749-b7rzr" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.422210 4867 scope.go:117] "RemoveContainer" containerID="d9bc20eb397e5cdd69feae306038d003a806f85daf7db6e801792855182536ab" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.424020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerStarted","Data":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.424227 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.426878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"0552eb77-2bc5-49dd-911e-f08071a83da9","Type":"ContainerStarted","Data":"45cdcdab2bca2f249b4526281374a26986b946d9e5b8bf5149fcc82a569681fc"} Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.426953 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.430009 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.430045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.483623 4867 scope.go:117] "RemoveContainer" containerID="b1095c8191bae78e5faa82320823678ede638e643a2b7ac06c8450de766b1b8a" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484195 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484593 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484608 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484633 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484641 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484661 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484668 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="init" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.484680 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484685 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484889 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fa85f647-f104-47eb-800c-5926241431c6" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.484899 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" containerName="dnsmasq-dns" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.485839 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.496921 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.501449 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502058 4867 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502088 4867 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: E0214 04:29:51.502141 4867 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift podName:1d9f9909-1442-4d83-b2aa-0f58d4022338 nodeName:}" failed. No retries permitted until 2026-02-14 04:30:07.502124551 +0000 UTC m=+1239.583061865 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift") pod "swift-storage-0" (UID: "1d9f9909-1442-4d83-b2aa-0f58d4022338") : configmap "swift-ring-files" not found Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.531699 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.554449 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=15.167913319 podStartE2EDuration="58.554427841s" podCreationTimestamp="2026-02-14 04:28:53 +0000 UTC" firstStartedPulling="2026-02-14 04:29:07.58666246 +0000 UTC m=+1179.667599774" lastFinishedPulling="2026-02-14 04:29:50.973176982 +0000 UTC m=+1223.054114296" observedRunningTime="2026-02-14 04:29:51.497262034 +0000 UTC m=+1223.578199348" watchObservedRunningTime="2026-02-14 04:29:51.554427841 +0000 UTC m=+1223.635365155" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.607340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.609664 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.640949 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=10.869751372 podStartE2EDuration="14.640922141s" podCreationTimestamp="2026-02-14 04:29:37 +0000 UTC" firstStartedPulling="2026-02-14 04:29:38.974413313 +0000 UTC m=+1211.055350627" lastFinishedPulling="2026-02-14 04:29:42.745584082 +0000 UTC m=+1214.826521396" observedRunningTime="2026-02-14 04:29:51.528588309 +0000 UTC m=+1223.609525623" watchObservedRunningTime="2026-02-14 04:29:51.640922141 +0000 UTC m=+1223.721859455" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.690719 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.705879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.705973 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.706061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.706187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.723593 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.733227 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c89d5d749-b7rzr"] Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.756053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808670 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.808845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.809453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.809625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.829311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"glance-db-create-t56pc\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " pod="openstack/glance-db-create-t56pc" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.838413 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"glance-cff6-account-create-update-ktnvw\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.847427 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:51 crc kubenswrapper[4867]: I0214 04:29:51.943791 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.314617 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.441690 4867 generic.go:334] "Generic (PLEG): container finished" podID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerID="fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87" exitCode=0 Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.441768 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerDied","Data":"fff43a494e3449e28ca6700d0874bdb37750b54043064c0f45ea967f6e1b3a87"} Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.443920 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerStarted","Data":"46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9"} Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.527875 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:29:52 crc kubenswrapper[4867]: W0214 04:29:52.529211 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fef49b7_7486_40dc_aedc_9814adb071e2.slice/crio-045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8 WatchSource:0}: Error finding container 045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8: Status 404 returned error can't find the container with id 045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8 Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.565566 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.627879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.629499 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.653972 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.728356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.728641 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.740063 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.742000 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.749792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.752969 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.759027 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.765933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.773827 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830048 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830188 
4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.830347 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.831222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.845238 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.847225 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.849100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.851770 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"keystone-db-create-qmj24\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.876133 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932662 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.932720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.933115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.933314 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.934086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.934103 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.952783 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"placement-aef7-account-create-update-w7xz9\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.956378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"placement-db-create-brnhd\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " pod="openstack/placement-db-create-brnhd" Feb 14 04:29:52 crc kubenswrapper[4867]: I0214 04:29:52.995452 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.008606 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deae29d8-abfa-4fe4-8314-b02cf70eb5be" path="/var/lib/kubelet/pods/deae29d8-abfa-4fe4-8314-b02cf70eb5be/volumes" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.034920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.035681 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.035691 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.058055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"keystone-a782-account-create-update-dzhfz\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 
04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.118248 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.139662 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.181935 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.462816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerStarted","Data":"ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.463107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerStarted","Data":"045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.493129 4867 generic.go:334] "Generic (PLEG): container finished" podID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerID="63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2" exitCode=0 Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.493664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerDied","Data":"63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2"} Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.494355 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-t56pc" podStartSLOduration=2.4943317990000002 podStartE2EDuration="2.494331799s" podCreationTimestamp="2026-02-14 04:29:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:53.488926918 +0000 UTC m=+1225.569864232" watchObservedRunningTime="2026-02-14 04:29:53.494331799 +0000 UTC m=+1225.575269113" Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.592047 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:29:53 crc kubenswrapper[4867]: I0214 04:29:53.801669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.074227 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.158554 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.181990 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183110 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183521 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183732 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.183948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") pod \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\" (UID: \"92f44db3-78d7-4707-af34-daf9f3bbc0bf\") " Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.185541 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.187564 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.200670 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:29:54 crc kubenswrapper[4867]: E0214 04:29:54.201668 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.201688 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.203255 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f44db3-78d7-4707-af34-daf9f3bbc0bf" containerName="swift-ring-rebalance" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.203641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.204419 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.212986 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.235171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk" (OuterVolumeSpecName: "kube-api-access-8d9fk") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "kube-api-access-8d9fk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290789 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290877 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290939 4867 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290949 4867 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290959 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d9fk\" (UniqueName: \"kubernetes.io/projected/92f44db3-78d7-4707-af34-daf9f3bbc0bf-kube-api-access-8d9fk\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.290969 4867 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/92f44db3-78d7-4707-af34-daf9f3bbc0bf-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.359576 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.361013 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.376262 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.392906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.393001 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.394191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.408644 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.415697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.423854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts" (OuterVolumeSpecName: "scripts") pod "92f44db3-78d7-4707-af34-daf9f3bbc0bf" (UID: "92f44db3-78d7-4707-af34-daf9f3bbc0bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.426200 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"mysqld-exporter-openstack-db-create-7klnf\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.426265 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.494856 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.494975 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495129 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/92f44db3-78d7-4707-af34-daf9f3bbc0bf-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495140 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.495150 4867 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/92f44db3-78d7-4707-af34-daf9f3bbc0bf-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.555394 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerStarted","Data":"3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.582220 4867 generic.go:334] "Generic (PLEG): container finished" podID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerID="ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605" exitCode=0 Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.582284 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerDied","Data":"ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.598790 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod 
\"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.598893 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.601110 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.614794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerStarted","Data":"4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.614847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerStarted","Data":"1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.627947 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dc8sm" event={"ID":"92f44db3-78d7-4707-af34-daf9f3bbc0bf","Type":"ContainerDied","Data":"2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.627990 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2560c4e53d69d39e5b6393b89e72bba71dd48e723971acf1a56bff692ff3065d" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.628055 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dc8sm" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"mysqld-exporter-4f85-account-create-update-7m6h2\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634362 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerStarted","Data":"8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.634619 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.638806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerStarted","Data":"003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f"} Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.655620 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-qmj24" podStartSLOduration=2.655586883 podStartE2EDuration="2.655586883s" podCreationTimestamp="2026-02-14 04:29:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:29:54.652370559 +0000 UTC m=+1226.733307873" watchObservedRunningTime="2026-02-14 04:29:54.655586883 +0000 UTC m=+1226.736524207" Feb 14 04:29:54 crc kubenswrapper[4867]: I0214 04:29:54.659544 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.120187 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.217736 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") pod \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.217876 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") pod \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\" (UID: \"b72434a2-25c0-4fd4-89cf-eff7bee167c3\") " Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.218566 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b72434a2-25c0-4fd4-89cf-eff7bee167c3" (UID: "b72434a2-25c0-4fd4-89cf-eff7bee167c3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.251460 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw" (OuterVolumeSpecName: "kube-api-access-4dwgw") pod "b72434a2-25c0-4fd4-89cf-eff7bee167c3" (UID: "b72434a2-25c0-4fd4-89cf-eff7bee167c3"). InnerVolumeSpecName "kube-api-access-4dwgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.320056 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b72434a2-25c0-4fd4-89cf-eff7bee167c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.320096 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwgw\" (UniqueName: \"kubernetes.io/projected/b72434a2-25c0-4fd4-89cf-eff7bee167c3-kube-api-access-4dwgw\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.343613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.351357 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.651688 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-cff6-account-create-update-ktnvw" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.651736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-cff6-account-create-update-ktnvw" event={"ID":"b72434a2-25c0-4fd4-89cf-eff7bee167c3","Type":"ContainerDied","Data":"46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.652128 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46a9a76f15cacb4a470e49f4c30581d530830b0fd8172437a64106eaad5727e9" Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.654498 4867 generic.go:334] "Generic (PLEG): container finished" podID="853d3739-366e-498f-ac28-6df19ee88dee" containerID="4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af" exitCode=0 Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.654595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerDied","Data":"4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.659349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.661968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerStarted","Data":"b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.664863 4867 generic.go:334] "Generic (PLEG): container finished" podID="b10f828b-59d6-4eb2-8922-aec92f274280" containerID="4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4" exitCode=0 Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.664923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerDied","Data":"4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 
04:29:55.666951 4867 generic.go:334] "Generic (PLEG): container finished" podID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerID="659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7" exitCode=0 Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.666997 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerDied","Data":"659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.669385 4867 generic.go:334] "Generic (PLEG): container finished" podID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerID="6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb" exitCode=0 Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.669491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerDied","Data":"6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb"} Feb 14 04:29:55 crc kubenswrapper[4867]: I0214 04:29:55.671005 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerStarted","Data":"e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c"} Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.234049 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.349808 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") pod \"0fef49b7-7486-40dc-aedc-9814adb071e2\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.349885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") pod \"0fef49b7-7486-40dc-aedc-9814adb071e2\" (UID: \"0fef49b7-7486-40dc-aedc-9814adb071e2\") " Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.350590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fef49b7-7486-40dc-aedc-9814adb071e2" (UID: "0fef49b7-7486-40dc-aedc-9814adb071e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.370381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr" (OuterVolumeSpecName: "kube-api-access-9fxhr") pod "0fef49b7-7486-40dc-aedc-9814adb071e2" (UID: "0fef49b7-7486-40dc-aedc-9814adb071e2"). InnerVolumeSpecName "kube-api-access-9fxhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.452835 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fef49b7-7486-40dc-aedc-9814adb071e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.452881 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fxhr\" (UniqueName: \"kubernetes.io/projected/0fef49b7-7486-40dc-aedc-9814adb071e2-kube-api-access-9fxhr\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.682995 4867 generic.go:334] "Generic (PLEG): container finished" podID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerID="f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962" exitCode=0 Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.683111 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerDied","Data":"f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962"} Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.684918 4867 generic.go:334] "Generic (PLEG): container finished" podID="fa8913cb-b163-4973-b6e2-ac741177964e" containerID="41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c" exitCode=0 Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.685029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerDied","Data":"41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c"} Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-t56pc" event={"ID":"0fef49b7-7486-40dc-aedc-9814adb071e2","Type":"ContainerDied","Data":"045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8"} Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687415 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045d88360f02bd01b9a0a10a071b2c33fedbb74ad62b8c840dfe74592b470dd8" Feb 14 04:29:56 crc kubenswrapper[4867]: I0214 04:29:56.687631 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-t56pc" Feb 14 04:29:56 crc kubenswrapper[4867]: E0214 04:29:56.765718 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fef49b7_7486_40dc_aedc_9814adb071e2.slice\": RecentStats: unable to find data in memory cache]" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.238199 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.414010 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") pod \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.415417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62c2a1e-55e4-4b7d-90db-ab37eecdb659" (UID: "e62c2a1e-55e4-4b7d-90db-ab37eecdb659"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.416770 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") pod \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\" (UID: \"e62c2a1e-55e4-4b7d-90db-ab37eecdb659\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.417355 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.435553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c" (OuterVolumeSpecName: "kube-api-access-lwv5c") pod "e62c2a1e-55e4-4b7d-90db-ab37eecdb659" (UID: "e62c2a1e-55e4-4b7d-90db-ab37eecdb659"). InnerVolumeSpecName "kube-api-access-lwv5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.519930 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwv5c\" (UniqueName: \"kubernetes.io/projected/e62c2a1e-55e4-4b7d-90db-ab37eecdb659-kube-api-access-lwv5c\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.564002 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.670086 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.680345 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705650 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qmj24" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705806 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qmj24" event={"ID":"853d3739-366e-498f-ac28-6df19ee88dee","Type":"ContainerDied","Data":"1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e"} Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.705939 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1449dd6ba694df817431a1fd128385596c23c85bf11d7b4f85aa2c4a119c2a6e" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a782-account-create-update-dzhfz" event={"ID":"b10f828b-59d6-4eb2-8922-aec92f274280","Type":"ContainerDied","Data":"8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6"} Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707594 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b296b5d58f442c00028c4fdc60d37ab84f498118087ec78a227389a7fbdf5d6" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.707635 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a782-account-create-update-dzhfz" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-aef7-account-create-update-w7xz9" event={"ID":"e62c2a1e-55e4-4b7d-90db-ab37eecdb659","Type":"ContainerDied","Data":"003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f"} Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712418 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="003208322c8aa81d26f8c4c81ed09f0fbc97445ca54ffd023f4cbeef6d71c09f" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.712472 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-aef7-account-create-update-w7xz9" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.727841 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") pod \"b10f828b-59d6-4eb2-8922-aec92f274280\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.727921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") pod \"b10f828b-59d6-4eb2-8922-aec92f274280\" (UID: \"b10f828b-59d6-4eb2-8922-aec92f274280\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.729118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b10f828b-59d6-4eb2-8922-aec92f274280" (UID: "b10f828b-59d6-4eb2-8922-aec92f274280"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.745899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm" (OuterVolumeSpecName: "kube-api-access-cf9dm") pod "b10f828b-59d6-4eb2-8922-aec92f274280" (UID: "b10f828b-59d6-4eb2-8922-aec92f274280"). InnerVolumeSpecName "kube-api-access-cf9dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774487 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-brnhd" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774688 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-brnhd" event={"ID":"af1b76a6-cc66-4a23-893d-df38ba5aac38","Type":"ContainerDied","Data":"3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552"} Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.774742 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3153c4a07960d41a74e24a7930f090f79335648803580b8746827c9d1b684552" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831730 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") pod \"af1b76a6-cc66-4a23-893d-df38ba5aac38\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") pod \"853d3739-366e-498f-ac28-6df19ee88dee\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831893 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") pod \"853d3739-366e-498f-ac28-6df19ee88dee\" (UID: \"853d3739-366e-498f-ac28-6df19ee88dee\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.831933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") pod \"af1b76a6-cc66-4a23-893d-df38ba5aac38\" (UID: \"af1b76a6-cc66-4a23-893d-df38ba5aac38\") " Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.832427 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cf9dm\" (UniqueName: \"kubernetes.io/projected/b10f828b-59d6-4eb2-8922-aec92f274280-kube-api-access-cf9dm\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.832440 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b10f828b-59d6-4eb2-8922-aec92f274280-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.841250 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "853d3739-366e-498f-ac28-6df19ee88dee" (UID: 
"853d3739-366e-498f-ac28-6df19ee88dee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.841660 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af1b76a6-cc66-4a23-893d-df38ba5aac38" (UID: "af1b76a6-cc66-4a23-893d-df38ba5aac38"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.852680 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4" (OuterVolumeSpecName: "kube-api-access-wgmt4") pod "853d3739-366e-498f-ac28-6df19ee88dee" (UID: "853d3739-366e-498f-ac28-6df19ee88dee"). InnerVolumeSpecName "kube-api-access-wgmt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.869808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp" (OuterVolumeSpecName: "kube-api-access-z2vmp") pod "af1b76a6-cc66-4a23-893d-df38ba5aac38" (UID: "af1b76a6-cc66-4a23-893d-df38ba5aac38"). InnerVolumeSpecName "kube-api-access-z2vmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.923011 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.923763 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.923901 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.923987 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924048 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924106 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924206 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924253 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924317 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" 
containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924383 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: E0214 04:29:57.924436 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924494 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924829 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924902 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.924970 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="853d3739-366e-498f-ac28-6df19ee88dee" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925037 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925178 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" containerName="mariadb-account-create-update" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.925256 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" containerName="mariadb-database-create" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.926242 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.932913 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937016 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/853d3739-366e-498f-ac28-6df19ee88dee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937053 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmt4\" (UniqueName: \"kubernetes.io/projected/853d3739-366e-498f-ac28-6df19ee88dee-kube-api-access-wgmt4\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937065 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2vmp\" (UniqueName: \"kubernetes.io/projected/af1b76a6-cc66-4a23-893d-df38ba5aac38-kube-api-access-z2vmp\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.937076 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af1b76a6-cc66-4a23-893d-df38ba5aac38-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:57 crc kubenswrapper[4867]: I0214 04:29:57.971578 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.047879 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.048041 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.157251 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.157441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.159287 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " 
pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.208074 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"root-account-create-update-s7x2m\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.255120 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.348763 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.624116 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.771522 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") pod \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.771644 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") pod \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\" (UID: \"1207dbcf-080a-40c2-a0cb-ab39e7225aaf\") " Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.772465 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1207dbcf-080a-40c2-a0cb-ab39e7225aaf" (UID: "1207dbcf-080a-40c2-a0cb-ab39e7225aaf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.775821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv" (OuterVolumeSpecName: "kube-api-access-7mnvv") pod "1207dbcf-080a-40c2-a0cb-ab39e7225aaf" (UID: "1207dbcf-080a-40c2-a0cb-ab39e7225aaf"). InnerVolumeSpecName "kube-api-access-7mnvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802565 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" event={"ID":"1207dbcf-080a-40c2-a0cb-ab39e7225aaf","Type":"ContainerDied","Data":"e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c"} Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802615 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e85e63230db29e0559e88714471c8d9ce8ccc4c7c8f8d4e8ba69289318b4674c" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.802625 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4f85-account-create-update-7m6h2" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.874380 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mnvv\" (UniqueName: \"kubernetes.io/projected/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-kube-api-access-7mnvv\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.874422 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1207dbcf-080a-40c2-a0cb-ab39e7225aaf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:58 crc kubenswrapper[4867]: I0214 04:29:58.924104 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:29:59 crc kubenswrapper[4867]: W0214 04:29:59.600905 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e82314_4716_4d79_b6bf_777f09ee83f7.slice/crio-90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c WatchSource:0}: Error finding container 90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c: Status 404 returned error can't find the container with id 90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.733344 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" event={"ID":"fa8913cb-b163-4973-b6e2-ac741177964e","Type":"ContainerDied","Data":"b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf"} Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822415 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b15af05372af83870ac8348103bb677c8c101f4ec816b4f3aac84c848cfde8bf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.822480 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-7klnf" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.824882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerStarted","Data":"90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c"} Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.893653 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") pod \"fa8913cb-b163-4973-b6e2-ac741177964e\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.893976 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") pod \"fa8913cb-b163-4973-b6e2-ac741177964e\" (UID: \"fa8913cb-b163-4973-b6e2-ac741177964e\") " Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.894684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa8913cb-b163-4973-b6e2-ac741177964e" (UID: "fa8913cb-b163-4973-b6e2-ac741177964e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.899565 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl" (OuterVolumeSpecName: "kube-api-access-cbsfl") pod "fa8913cb-b163-4973-b6e2-ac741177964e" (UID: "fa8913cb-b163-4973-b6e2-ac741177964e"). InnerVolumeSpecName "kube-api-access-cbsfl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.996723 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbsfl\" (UniqueName: \"kubernetes.io/projected/fa8913cb-b163-4973-b6e2-ac741177964e-kube-api-access-cbsfl\") on node \"crc\" DevicePath \"\"" Feb 14 04:29:59 crc kubenswrapper[4867]: I0214 04:29:59.996770 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa8913cb-b163-4973-b6e2-ac741177964e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.136146 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:00 crc kubenswrapper[4867]: E0214 04:30:00.136984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc kubenswrapper[4867]: E0214 04:30:00.137177 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137286 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137590 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" containerName="mariadb-account-create-update" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.137692 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" containerName="mariadb-database-create" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.139034 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.141127 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.141243 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.159926 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.303356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.405307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.406736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod 
\"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.410650 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.426565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"collect-profiles-29517390-kwnnx\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.462267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.835401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerStarted","Data":"ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d"} Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.836666 4867 generic.go:334] "Generic (PLEG): container finished" podID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerID="f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25" exitCode=0 Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.836716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerDied","Data":"f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25"} Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.865406 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=33.664804234 podStartE2EDuration="1m6.865385879s" podCreationTimestamp="2026-02-14 04:28:54 +0000 UTC" firstStartedPulling="2026-02-14 04:29:26.500245996 +0000 UTC m=+1198.581183310" lastFinishedPulling="2026-02-14 04:29:59.700827641 +0000 UTC m=+1231.781764955" observedRunningTime="2026-02-14 04:30:00.862319697 +0000 UTC m=+1232.943257011" watchObservedRunningTime="2026-02-14 04:30:00.865385879 +0000 UTC m=+1232.946323193" Feb 14 04:30:00 crc kubenswrapper[4867]: W0214 04:30:00.932544 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c88887_cc0d_4b61_9ccc_e5583c27322f.slice/crio-59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40 WatchSource:0}: Error finding container 59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40: Status 404 returned error can't find the container with id 59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40 Feb 14 04:30:00 crc kubenswrapper[4867]: I0214 04:30:00.933750 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"] Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.402068 4867 
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.402068 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6c8864b6b5-mwdd6" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" containerID="cri-o://c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" gracePeriod=15
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.402756 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.807404 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gzvxs"]
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.813302 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.817978 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.818197 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.823633 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gzvxs"]
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854655 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854711 4867 generic.go:334] "Generic (PLEG): container finished" podID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerID="c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" exitCode=2
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.854794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerDied","Data":"c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e"}
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.858226 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerID="1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d" exitCode=0
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.858302 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerDied","Data":"1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d"}
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.858358 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerStarted","Data":"59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40"}
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.944788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.944966 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.945016 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.945193 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:01 crc kubenswrapper[4867]: I0214 04:30:01.947774 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7lpqj" podUID="16c28c0f-9310-4721-87cf-2d1bb88b5bba" containerName="ovn-controller" probeResult="failure" output=<
Feb 14 04:30:01 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 14 04:30:01 crc kubenswrapper[4867]: >
Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047645 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.047724 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.053665 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs"
\"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.053704 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.064317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"glance-db-sync-gzvxs\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.124769 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.125112 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.148397 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.257703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258150 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258406 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258566 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258702 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.258807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") pod \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\" (UID: \"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.259854 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca" (OuterVolumeSpecName: "service-ca") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.259884 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config" (OuterVolumeSpecName: "console-config") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.260547 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.261041 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.263603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.263822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.267347 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87" (OuterVolumeSpecName: "kube-api-access-lnn87") pod "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" (UID: "c4a25aef-4eee-4b48-b50a-0bf8fb0c1602"). InnerVolumeSpecName "kube-api-access-lnn87". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361617 4867 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361655 4867 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-service-ca\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361669 4867 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361681 4867 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-console-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361699 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnn87\" (UniqueName: \"kubernetes.io/projected/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-kube-api-access-lnn87\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361712 4867 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.361720 4867 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.388490 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.463361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") pod \"69e82314-4716-4d79-b6bf-777f09ee83f7\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.464069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") pod \"69e82314-4716-4d79-b6bf-777f09ee83f7\" (UID: \"69e82314-4716-4d79-b6bf-777f09ee83f7\") " Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.465243 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69e82314-4716-4d79-b6bf-777f09ee83f7" (UID: "69e82314-4716-4d79-b6bf-777f09ee83f7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.472392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t" (OuterVolumeSpecName: "kube-api-access-4gg7t") pod "69e82314-4716-4d79-b6bf-777f09ee83f7" (UID: "69e82314-4716-4d79-b6bf-777f09ee83f7"). InnerVolumeSpecName "kube-api-access-4gg7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.566798 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69e82314-4716-4d79-b6bf-777f09ee83f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.566857 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gg7t\" (UniqueName: \"kubernetes.io/projected/69e82314-4716-4d79-b6bf-777f09ee83f7-kube-api-access-4gg7t\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.804397 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-s7x2m" event={"ID":"69e82314-4716-4d79-b6bf-777f09ee83f7","Type":"ContainerDied","Data":"90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877551 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a619509978f686be0c8500e2ce1d1e1d540d50a43739ab895b3767799dad1c" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.877522 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-s7x2m" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.879022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerStarted","Data":"a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881244 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6c8864b6b5-mwdd6_c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/console/0.log" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6c8864b6b5-mwdd6" event={"ID":"c4a25aef-4eee-4b48-b50a-0bf8fb0c1602","Type":"ContainerDied","Data":"999f569ca24af828fccac613f37abfd55e6b13b288390e3bcddcc9896a94a3f7"} Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881797 4867 scope.go:117] "RemoveContainer" containerID="c2a0f0ef4fc35a56210a1bd277b9f8c3dbe6b717fe6cba021a58146d554cbf3e" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.881832 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6c8864b6b5-mwdd6" Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.927348 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:30:02 crc kubenswrapper[4867]: I0214 04:30:02.936849 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6c8864b6b5-mwdd6"] Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.026672 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" path="/var/lib/kubelet/pods/c4a25aef-4eee-4b48-b50a-0bf8fb0c1602/volumes" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.371639 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506774 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506884 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.506987 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") pod \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\" (UID: \"f7c88887-cc0d-4b61-9ccc-e5583c27322f\") " Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.509131 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.514719 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.517977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2" (OuterVolumeSpecName: "kube-api-access-4dxx2") pod "f7c88887-cc0d-4b61-9ccc-e5583c27322f" (UID: "f7c88887-cc0d-4b61-9ccc-e5583c27322f"). InnerVolumeSpecName "kube-api-access-4dxx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611598 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7c88887-cc0d-4b61-9ccc-e5583c27322f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611628 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dxx2\" (UniqueName: \"kubernetes.io/projected/f7c88887-cc0d-4b61-9ccc-e5583c27322f-kube-api-access-4dxx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.611638 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7c88887-cc0d-4b61-9ccc-e5583c27322f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902156 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx" event={"ID":"f7c88887-cc0d-4b61-9ccc-e5583c27322f","Type":"ContainerDied","Data":"59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40"} Feb 14 04:30:03 crc kubenswrapper[4867]: I0214 04:30:03.902224 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59711ea2ab0acd44b6bdb18cb66a56f569e3f705ecdd3852368745b58d075e40" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.180465 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650236 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650749 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650774 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650798 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650807 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: E0214 04:30:04.650827 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.650834 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651021 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a25aef-4eee-4b48-b50a-0bf8fb0c1602" containerName="console" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651034 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" containerName="collect-profiles" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.651048 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" containerName="mariadb-account-create-update" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.661678 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.671858 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.739314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.739668 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.749550 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.760575 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-s7x2m"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.842563 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.842776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.844365 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.857815 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.859238 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.861607 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.864882 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"mysqld-exporter-openstack-cell1-db-create-pjc8k\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.870648 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.944423 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.944498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:04 crc kubenswrapper[4867]: I0214 04:30:04.993444 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.014739 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e82314-4716-4d79-b6bf-777f09ee83f7" path="/var/lib/kubelet/pods/69e82314-4716-4d79-b6bf-777f09ee83f7/volumes" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.046652 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.046722 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.047886 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.066698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"mysqld-exporter-92c4-account-create-update-r2w8b\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:05 crc kubenswrapper[4867]: I0214 04:30:05.224493 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.394492 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.558344 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.932783 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7lpqj" podUID="16c28c0f-9310-4721-87cf-2d1bb88b5bba" containerName="ovn-controller" probeResult="failure" output=< Feb 14 04:30:06 crc kubenswrapper[4867]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 14 04:30:06 crc kubenswrapper[4867]: > Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.935704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerStarted","Data":"7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a"} Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.935745 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerStarted","Data":"78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de"} Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939283 4867 generic.go:334] "Generic (PLEG): container finished" podID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerID="50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153" exitCode=0 Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939344 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerDied","Data":"50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153"} Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.939371 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerStarted","Data":"38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a"} Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.962338 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" podStartSLOduration=2.9623119989999998 podStartE2EDuration="2.962311999s" podCreationTimestamp="2026-02-14 04:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:06.951588515 +0000 UTC m=+1239.032525849" watchObservedRunningTime="2026-02-14 04:30:06.962311999 +0000 UTC m=+1239.043249313" Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.987005 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:30:06 crc kubenswrapper[4867]: I0214 04:30:06.996561 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-dznst" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.245666 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.247180 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.252724 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.254934 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297056 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297103 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297202 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297220 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297296 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.297371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.398956 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399011 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399090 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399163 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.399210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400661 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400707 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.400708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.401222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.401605 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.427835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"ovn-controller-7lpqj-config-4bc2q\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.574345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.604162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.609926 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1d9f9909-1442-4d83-b2aa-0f58d4022338-etc-swift\") pod \"swift-storage-0\" (UID: \"1d9f9909-1442-4d83-b2aa-0f58d4022338\") " pod="openstack/swift-storage-0" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.715415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.969886 4867 generic.go:334] "Generic (PLEG): container finished" podID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerID="7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a" exitCode=0 Feb 14 04:30:07 crc kubenswrapper[4867]: I0214 04:30:07.970120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerDied","Data":"7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a"} Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.151146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.160202 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.162694 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.165625 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.170453 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.217058 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.217217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.319047 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.319190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.320221 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.341551 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"root-account-create-update-wrzv9\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.439797 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 14 04:30:08 crc kubenswrapper[4867]: W0214 04:30:08.450941 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d9f9909_1442_4d83_b2aa_0f58d4022338.slice/crio-23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6 WatchSource:0}: Error finding container 23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6: Status 404 returned error can't find the container with id 
23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6 Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.458720 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.477868 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.518878 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.522607 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") pod \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") pod \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\" (UID: \"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a\") " Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523424 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" (UID: "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.523891 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.526694 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq" (OuterVolumeSpecName: "kube-api-access-9b8cq") pod "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" (UID: "2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"). InnerVolumeSpecName "kube-api-access-9b8cq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.626379 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b8cq\" (UniqueName: \"kubernetes.io/projected/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a-kube-api-access-9b8cq\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.985299 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.982196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" event={"ID":"2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a","Type":"ContainerDied","Data":"38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a"} Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.986230 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38c9afded06746b29bfa2201d287dd3b9aab364027f290cc790a3d8432ff496a" Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.987401 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerStarted","Data":"026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d"} Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.987457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerStarted","Data":"75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b"} Feb 14 04:30:08 crc kubenswrapper[4867]: I0214 04:30:08.990402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"23eb505016f90201653e7b35eef6125dd740c7a5fd2dd394403138863672d2e6"} Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.021036 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7lpqj-config-4bc2q" podStartSLOduration=2.021014042 podStartE2EDuration="2.021014042s" podCreationTimestamp="2026-02-14 04:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:09.005889141 +0000 UTC m=+1241.086826455" watchObservedRunningTime="2026-02-14 04:30:09.021014042 +0000 UTC m=+1241.101951376" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.027379 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.421486 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.447437 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") pod \"36e07f1b-6481-42a9-a605-b472a8cc3945\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.447551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") pod \"36e07f1b-6481-42a9-a605-b472a8cc3945\" (UID: \"36e07f1b-6481-42a9-a605-b472a8cc3945\") " Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.448364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "36e07f1b-6481-42a9-a605-b472a8cc3945" (UID: "36e07f1b-6481-42a9-a605-b472a8cc3945"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.453568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m" (OuterVolumeSpecName: "kube-api-access-w8m5m") pod "36e07f1b-6481-42a9-a605-b472a8cc3945" (UID: "36e07f1b-6481-42a9-a605-b472a8cc3945"). InnerVolumeSpecName "kube-api-access-w8m5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.550731 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/36e07f1b-6481-42a9-a605-b472a8cc3945-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.550767 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8m5m\" (UniqueName: \"kubernetes.io/projected/36e07f1b-6481-42a9-a605-b472a8cc3945-kube-api-access-w8m5m\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.965897 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:30:09 crc kubenswrapper[4867]: E0214 04:30:09.966661 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966679 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create" Feb 14 04:30:09 crc kubenswrapper[4867]: E0214 04:30:09.966699 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966707 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966968 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" containerName="mariadb-account-create-update" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.966988 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" containerName="mariadb-database-create" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.967806 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.976300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 04:30:09 crc kubenswrapper[4867]: I0214 04:30:09.984098 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.037968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" event={"ID":"36e07f1b-6481-42a9-a605-b472a8cc3945","Type":"ContainerDied","Data":"78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de"} Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.038012 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78c1a5c6ba3bac40138a220ca33469d903b12f8b6d092ee2c71a5440c73661de" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.038071 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-92c4-account-create-update-r2w8b" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039846 4867 generic.go:334] "Generic (PLEG): container finished" podID="27300ba4-09df-4f4c-b247-4ba37572690d" containerID="7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f" exitCode=0 Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039917 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerDied","Data":"7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f"} Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.039961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerStarted","Data":"43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e"} Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.045576 4867 generic.go:334] "Generic (PLEG): container finished" podID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerID="026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d" exitCode=0 Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.045643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerDied","Data":"026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d"} Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.172976 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.173131 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: 
\"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.173176 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275351 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.275535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.281795 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.282324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.293942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"mysqld-exporter-0\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.395201 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:30:10 crc kubenswrapper[4867]: I0214 04:30:10.994024 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.058466 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerStarted","Data":"5b4f6da6858b80468a9ce475d2d3c8ccdc38ea567758289aef5a49879e4b28e8"} Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.403176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.417820 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.625036 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.631349 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736609 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736715 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") pod \"27300ba4-09df-4f4c-b247-4ba37572690d\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736753 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736904 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.736971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737114 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") pod \"27300ba4-09df-4f4c-b247-4ba37572690d\" (UID: \"27300ba4-09df-4f4c-b247-4ba37572690d\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737145 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") pod \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\" (UID: \"5aa59e7c-c4ba-4a88-9744-c2b0752de11e\") " Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737638 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737724 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run" (OuterVolumeSpecName: "var-run") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.737795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.738201 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "27300ba4-09df-4f4c-b247-4ba37572690d" (UID: "27300ba4-09df-4f4c-b247-4ba37572690d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.738733 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.742805 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts" (OuterVolumeSpecName: "scripts") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.742869 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx" (OuterVolumeSpecName: "kube-api-access-76lbx") pod "5aa59e7c-c4ba-4a88-9744-c2b0752de11e" (UID: "5aa59e7c-c4ba-4a88-9744-c2b0752de11e"). 
InnerVolumeSpecName "kube-api-access-76lbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.745185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x" (OuterVolumeSpecName: "kube-api-access-7mz5x") pod "27300ba4-09df-4f4c-b247-4ba37572690d" (UID: "27300ba4-09df-4f4c-b247-4ba37572690d"). InnerVolumeSpecName "kube-api-access-7mz5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.839990 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lbx\" (UniqueName: \"kubernetes.io/projected/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-kube-api-access-76lbx\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840034 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mz5x\" (UniqueName: \"kubernetes.io/projected/27300ba4-09df-4f4c-b247-4ba37572690d-kube-api-access-7mz5x\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840045 4867 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840055 4867 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840064 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/27300ba4-09df-4f4c-b247-4ba37572690d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840072 4867 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840081 4867 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-var-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.840089 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5aa59e7c-c4ba-4a88-9744-c2b0752de11e-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:11 crc kubenswrapper[4867]: I0214 04:30:11.922745 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7lpqj" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.086774 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"398bb71b3312708e844a7aaca2933d075418d20c032c02c76d00e463b4d57eae"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091260 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wrzv9" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091268 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrzv9" event={"ID":"27300ba4-09df-4f4c-b247-4ba37572690d","Type":"ContainerDied","Data":"43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.091436 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b73a74f924a2179cf54434848a87156879532d08026905255ec14a3199eb2e" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092719 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7lpqj-config-4bc2q" event={"ID":"5aa59e7c-c4ba-4a88-9744-c2b0752de11e","Type":"ContainerDied","Data":"75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b"} Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092753 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7lpqj-config-4bc2q" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.092770 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75debe79a7aa9be3c0df6cc3f6875a2099e92a0112633516361a18ba6a6f487b" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.103914 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.138213 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:12 crc kubenswrapper[4867]: I0214 04:30:12.155107 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7lpqj-config-4bc2q"] Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.011071 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" path="/var/lib/kubelet/pods/5aa59e7c-c4ba-4a88-9744-c2b0752de11e/volumes" Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.105677 4867 generic.go:334] "Generic (PLEG): container finished" podID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerID="2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.105743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea"} Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.111297 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerID="262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.111378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230"} Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.112998 4867 generic.go:334] "Generic (PLEG): container finished" podID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerID="cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08" exitCode=0 Feb 14 04:30:13 crc kubenswrapper[4867]: I0214 04:30:13.113668 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08"} Feb 14 04:30:14 crc kubenswrapper[4867]: I0214 04:30:14.759643 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:14 crc kubenswrapper[4867]: I0214 04:30:14.774501 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wrzv9"] Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.012923 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" path="/var/lib/kubelet/pods/27300ba4-09df-4f4c-b247-4ba37572690d/volumes" Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.926398 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927104 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" containerID="cri-o://4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" gracePeriod=600 Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927202 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" containerID="cri-o://ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" gracePeriod=600 Feb 14 04:30:15 crc kubenswrapper[4867]: I0214 04:30:15.927258 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" containerID="cri-o://c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" gracePeriod=600 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140652 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" exitCode=0 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140688 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" exitCode=0 Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d"} Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.140759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5"} Feb 14 04:30:16 crc kubenswrapper[4867]: I0214 04:30:16.403364 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: 
connection refused" Feb 14 04:30:17 crc kubenswrapper[4867]: I0214 04:30:17.153266 4867 generic.go:334] "Generic (PLEG): container finished" podID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerID="c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" exitCode=0 Feb 14 04:30:17 crc kubenswrapper[4867]: I0214 04:30:17.153352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e"} Feb 14 04:30:18 crc kubenswrapper[4867]: I0214 04:30:18.200901 4867 generic.go:334] "Generic (PLEG): container finished" podID="6bc83863-74f4-4509-969c-0f3305a542a8" containerID="da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d" exitCode=0 Feb 14 04:30:18 crc kubenswrapper[4867]: I0214 04:30:18.200953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d"} Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.787685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:19 crc kubenswrapper[4867]: E0214 04:30:19.788759 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.788779 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: E0214 04:30:19.788800 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.788806 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.789113 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="27300ba4-09df-4f4c-b247-4ba37572690d" containerName="mariadb-account-create-update" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.789141 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aa59e7c-c4ba-4a88-9744-c2b0752de11e" containerName="ovn-config" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.790095 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.794898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.808321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.937963 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:19 crc kubenswrapper[4867]: I0214 04:30:19.938119 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.039890 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.040076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.040698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.073389 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"root-account-create-update-k62wg\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:20 crc kubenswrapper[4867]: I0214 04:30:20.114154 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:21 crc kubenswrapper[4867]: I0214 04:30:21.403536 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: connection refused" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.543609 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.544694 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wlznh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-gzvxs_openstack(e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:30:21 crc kubenswrapper[4867]: E0214 04:30:21.546422 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-gzvxs" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.162864 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.173443 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.268403 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerStarted","Data":"88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.268818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.278958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerStarted","Data":"47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.279212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.295097 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"a8c244694fc7435e98a557f3f04e1495068244e31e093a964628b855e26e1004"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.321784 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerStarted","Data":"1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.322654 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324728 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324750 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.324962 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325000 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325041 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.325187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") pod \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\" (UID: \"c755009c-2bb6-4f8f-9b53-460a0e4c9447\") " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327228 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.327739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328566 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328582 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.328593 4867 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c755009c-2bb6-4f8f-9b53-460a0e4c9447-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.334756 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.336687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v" (OuterVolumeSpecName: "kube-api-access-tpz8v") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "kube-api-access-tpz8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.342667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out" (OuterVolumeSpecName: "config-out") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.344423 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.344571 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config" (OuterVolumeSpecName: "config") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346139 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c755009c-2bb6-4f8f-9b53-460a0e4c9447","Type":"ContainerDied","Data":"bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.346268 4867 scope.go:117] "RemoveContainer" containerID="ac3d49c697d1a12bf76bc8aaf7a8fbec4fa259f04adb512889e6f0cf63f6e93d" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.348154 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=-9223371940.506638 podStartE2EDuration="1m36.348138004s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.250294097 +0000 UTC m=+1161.331231411" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:22.319129885 +0000 UTC m=+1254.400067199" watchObservedRunningTime="2026-02-14 04:30:22.348138004 +0000 UTC m=+1254.429075318" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.364548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerStarted","Data":"3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989"} Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.364801 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.368469 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-gzvxs" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.380770 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.562173676 podStartE2EDuration="1m36.380747158s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.106641381 +0000 UTC m=+1161.187578685" lastFinishedPulling="2026-02-14 04:29:37.925214853 +0000 UTC m=+1210.006152167" observedRunningTime="2026-02-14 04:30:22.345095953 +0000 UTC m=+1254.426033257" watchObservedRunningTime="2026-02-14 04:30:22.380747158 +0000 UTC m=+1254.461684472" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.382300 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" 
(UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "pvc-0eda836b-4d69-49e8-a582-e29da56fd005". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.392643 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371939.462154 podStartE2EDuration="1m37.392620933s" podCreationTimestamp="2026-02-14 04:28:45 +0000 UTC" firstStartedPulling="2026-02-14 04:28:47.783547537 +0000 UTC m=+1159.864484851" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:22.391838812 +0000 UTC m=+1254.472776146" watchObservedRunningTime="2026-02-14 04:30:22.392620933 +0000 UTC m=+1254.473558247" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.404870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config" (OuterVolumeSpecName: "web-config") pod "c755009c-2bb6-4f8f-9b53-460a0e4c9447" (UID: "c755009c-2bb6-4f8f-9b53-460a0e4c9447"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430353 4867 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430405 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") on node \"crc\" " Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430420 4867 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-web-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430431 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430442 4867 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c755009c-2bb6-4f8f-9b53-460a0e4c9447-config-out\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430458 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpz8v\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-kube-api-access-tpz8v\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.430473 4867 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c755009c-2bb6-4f8f-9b53-460a0e4c9447-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.446141 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=-9223371940.408657 podStartE2EDuration="1m36.44611959s" podCreationTimestamp="2026-02-14 04:28:46 +0000 UTC" firstStartedPulling="2026-02-14 04:28:49.444862718 +0000 UTC m=+1161.525800032" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-14 04:30:22.41326867 +0000 UTC m=+1254.494205984" watchObservedRunningTime="2026-02-14 04:30:22.44611959 +0000 UTC m=+1254.527056914" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.468475 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.468661 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0eda836b-4d69-49e8-a582-e29da56fd005" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005") on node "crc" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.535066 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.692872 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.715325 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.733982 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734525 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734544 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734561 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734569 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734584 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="init-config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734592 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="init-config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.734607 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734613 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734819 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="thanos-sidecar" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734843 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="config-reloader" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.734854 4867 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" containerName="prometheus" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.736791 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.744634 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.744905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745069 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745181 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745397 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745632 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.745717 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-dgxf9" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.764295 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.767009 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840849 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.840991 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841152 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841215 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841344 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841601 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.841634 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943732 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943766 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943833 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943901 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.943917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: E0214 04:30:22.943901 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc755009c_2bb6_4f8f_9b53_460a0e4c9447.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc755009c_2bb6_4f8f_9b53_460a0e4c9447.slice/crio-bf0605b193983ab03177306fae17d696c18a8e3789f84b06d5ef6b3d006f8d77\": RecentStats: unable to find data in memory cache]" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.945297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" 
Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.945822 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.948603 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8c8003cd-8992-4714-96a2-2e649aead118-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.956390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.960034 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c8003cd-8992-4714-96a2-2e649aead118-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.963716 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.965850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.967109 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.971906 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-config\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.972321 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.990412 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8c8003cd-8992-4714-96a2-2e649aead118-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.994951 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzqt\" (UniqueName: \"kubernetes.io/projected/8c8003cd-8992-4714-96a2-2e649aead118-kube-api-access-vkzqt\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.999263 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:30:22 crc kubenswrapper[4867]: I0214 04:30:22.999316 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c69566d4c941ca8a51b196b92114beed9536eafb9e04e7c441265c9a20c9feb/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.033675 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c755009c-2bb6-4f8f-9b53-460a0e4c9447" path="/var/lib/kubelet/pods/c755009c-2bb6-4f8f-9b53-460a0e4c9447/volumes" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.133046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0eda836b-4d69-49e8-a582-e29da56fd005\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0eda836b-4d69-49e8-a582-e29da56fd005\") pod \"prometheus-metric-storage-0\" (UID: \"8c8003cd-8992-4714-96a2-2e649aead118\") " pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.149956 4867 scope.go:117] "RemoveContainer" containerID="c62b1e6f71da03f759075e45d595dab84ceabe23bcfb61adf4ba71561bb4ec1e" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.207100 4867 scope.go:117] "RemoveContainer" containerID="4692a5c730542a5c7abd2ae37dcefb0197b935ec9ce8b16d0469afd4527db7f5" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.235621 4867 scope.go:117] "RemoveContainer" containerID="a1fd36c74b9a00850c975f49583fd6e7537b5b3ab16d29f2ed2f5ae6fb4437b4" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.369702 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:23 crc kubenswrapper[4867]: I0214 04:30:23.376612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerStarted","Data":"55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.029548 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.390381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerStarted","Data":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.392760 4867 generic.go:334] "Generic (PLEG): container finished" podID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerID="d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786" exitCode=0 Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.392907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerDied","Data":"d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.394212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"6ce12f713c335690a2513c8e5c41d62f6986ad5075f62ffd19b2415ed4452ee3"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.397163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"8da0aba94a4a7f951a4223525abb26545d4de6b66bc664f318fbea2911415a00"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.397522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"81fa65452a80b2b0c08b069b6dc00609e7d342372926d36aa52b686f77827908"} Feb 14 04:30:24 crc kubenswrapper[4867]: I0214 04:30:24.420019 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.226813423 podStartE2EDuration="15.420000695s" podCreationTimestamp="2026-02-14 04:30:09 +0000 UTC" firstStartedPulling="2026-02-14 04:30:11.015305349 +0000 UTC m=+1243.096242663" lastFinishedPulling="2026-02-14 04:30:23.208492621 +0000 UTC m=+1255.289429935" observedRunningTime="2026-02-14 04:30:24.414179641 +0000 UTC m=+1256.495116965" watchObservedRunningTime="2026-02-14 04:30:24.420000695 +0000 UTC m=+1256.500938009" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.783214 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928396 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") pod \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928590 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") pod \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\" (UID: \"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb\") " Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.928988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" (UID: "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.929730 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:25 crc kubenswrapper[4867]: I0214 04:30:25.933494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562" (OuterVolumeSpecName: "kube-api-access-6d562") pod "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" (UID: "f0d44618-795d-4cc5-a98b-c0c5d77ffdcb"). InnerVolumeSpecName "kube-api-access-6d562". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.031955 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d562\" (UniqueName: \"kubernetes.io/projected/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb-kube-api-access-6d562\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.421759 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"efe6e04eebaa51a773f5ff3806454667c86de6e90aca56835882d6444f13caf6"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422015 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"1d04d99377725127e69150ab5bcfe965bd9b461870f6da8cec3a2ff8ed034518"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"686c73d533d973bb8153f7ef7326df85061774a9fc70120d3fa377b9e1387640"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.422035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"821b3a185a45849e9a1559bbd36e59ca48eb8658d160347636a2e1ad9db67f35"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425014 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-k62wg" event={"ID":"f0d44618-795d-4cc5-a98b-c0c5d77ffdcb","Type":"ContainerDied","Data":"55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b"} Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425046 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55a476092d45641169dd1f9b0ab4f579c321db2f63ef5761996c7ee620cab57b" Feb 14 04:30:26 crc kubenswrapper[4867]: I0214 04:30:26.425100 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-k62wg" Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.462257 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52"} Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.502355 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"f3d0f45e0c8dcd27bf4c357f0760d7a88a21e4563646ed613a1f70b025d12de5"} Feb 14 04:30:28 crc kubenswrapper[4867]: I0214 04:30:28.502416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"c56227dc31954100c0a5590c97187517174534391f7211ff7f5c7d543338986c"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"e945d1f621042caf5ca683f5765ab964805633398e805c62b5aa6173921fddae"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"0739eae9cc92796e37e6cda70f62623c7572481a8b083af30af4fa79725c6a08"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"8d7a4565e255548858645306dc0d9afcb0a7904220d938de57a504f698e378c1"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"cd2da4f9efd152f80c27e4bd000bff2c0c53a6103369c05f9ed818f34dcba557"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.519828 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"1d9f9909-1442-4d83-b2aa-0f58d4022338","Type":"ContainerStarted","Data":"2969b37cbba482ef69bdafb4a19ad0556d600cc40f9bed477663d1cf485b5736"} Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.556389 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.465310755 podStartE2EDuration="55.556368372s" podCreationTimestamp="2026-02-14 04:29:34 +0000 UTC" firstStartedPulling="2026-02-14 04:30:08.458402764 +0000 UTC m=+1240.539340078" lastFinishedPulling="2026-02-14 04:30:27.549460381 +0000 UTC m=+1259.630397695" observedRunningTime="2026-02-14 04:30:29.555072568 +0000 UTC m=+1261.636009882" watchObservedRunningTime="2026-02-14 04:30:29.556368372 +0000 UTC m=+1261.637305686" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.886829 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:29 crc kubenswrapper[4867]: E0214 04:30:29.887273 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 
04:30:29.887293 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.887537 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" containerName="mariadb-account-create-update" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.888675 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.890913 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.904657 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919718 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919840 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919863 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:29 crc kubenswrapper[4867]: I0214 04:30:29.919914 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022307 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022388 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022423 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.022572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.023627 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod 
\"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.024115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.042078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"dnsmasq-dns-764c5664d7-sp44n\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:30 crc kubenswrapper[4867]: I0214 04:30:30.511763 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.063804 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:31 crc kubenswrapper[4867]: W0214 04:30:31.073752 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d457dc_19b4_4279_8c97_930f91291f98.slice/crio-3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85 WatchSource:0}: Error finding container 3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85: Status 404 returned error can't find the container with id 3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85 Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.542865 4867 generic.go:334] "Generic (PLEG): container finished" podID="e2d457dc-19b4-4279-8c97-930f91291f98" containerID="3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e" exitCode=0 Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.542968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e"} Feb 14 04:30:31 crc kubenswrapper[4867]: I0214 04:30:31.543167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerStarted","Data":"3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85"} Feb 14 04:30:32 crc kubenswrapper[4867]: I0214 04:30:32.555658 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerStarted","Data":"cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c"} Feb 14 04:30:32 crc kubenswrapper[4867]: I0214 04:30:32.556847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:32 crc kubenswrapper[4867]: I0214 04:30:32.578477 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" podStartSLOduration=3.578457613 podStartE2EDuration="3.578457613s" podCreationTimestamp="2026-02-14 04:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-14 04:30:32.572886976 +0000 UTC m=+1264.653824300" watchObservedRunningTime="2026-02-14 04:30:32.578457613 +0000 UTC m=+1264.659394927" Feb 14 04:30:34 crc kubenswrapper[4867]: I0214 04:30:34.574622 4867 generic.go:334] "Generic (PLEG): container finished" podID="8c8003cd-8992-4714-96a2-2e649aead118" containerID="f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52" exitCode=0 Feb 14 04:30:34 crc kubenswrapper[4867]: I0214 04:30:34.574684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerDied","Data":"f794e66a47539fff9a1617446ae42aa3d803d21ed1af7899fe2421e8ae424f52"} Feb 14 04:30:35 crc kubenswrapper[4867]: I0214 04:30:35.585643 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"bd8e323aacf47614946e6e8abe299298cfd77b36b733c7407e17376f135951d2"} Feb 14 04:30:37 crc kubenswrapper[4867]: I0214 04:30:37.213840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:30:37 crc kubenswrapper[4867]: I0214 04:30:37.920288 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.058650 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.252767 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.128:5671: connect: connection refused" Feb 14 04:30:38 crc kubenswrapper[4867]: I0214 04:30:38.663649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"0b63afd8dc2a0d8476b73e2da3aa351e15e56e5caa4dd0a689dae875878c5456"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.002747 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2e27a3cb_c301_4fa0_b9a1_9aa3bac0305a.slice" Feb 14 04:30:39 crc kubenswrapper[4867]: E0214 04:30:39.002818 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2e27a3cb_c301_4fa0_b9a1_9aa3bac0305a.slice" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.675690 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerStarted","Data":"cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.680872 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.681648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8c8003cd-8992-4714-96a2-2e649aead118","Type":"ContainerStarted","Data":"bd79661dacb3f2e02a3379c14a86446419599ff01a98ca85f72ac077fb6c5343"} Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.720094 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gzvxs" podStartSLOduration=2.858651086 podStartE2EDuration="38.720066978s" podCreationTimestamp="2026-02-14 04:30:01 +0000 UTC" firstStartedPulling="2026-02-14 04:30:02.78932871 +0000 UTC m=+1234.870266024" lastFinishedPulling="2026-02-14 04:30:38.650744602 +0000 UTC m=+1270.731681916" observedRunningTime="2026-02-14 04:30:39.695091756 +0000 UTC m=+1271.776029070" watchObservedRunningTime="2026-02-14 04:30:39.720066978 +0000 UTC m=+1271.801004292" Feb 14 04:30:39 crc kubenswrapper[4867]: I0214 04:30:39.738778 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.738759534 podStartE2EDuration="17.738759534s" podCreationTimestamp="2026-02-14 04:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:39.728912583 +0000 UTC m=+1271.809849897" watchObservedRunningTime="2026-02-14 04:30:39.738759534 +0000 UTC m=+1271.819696848" Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.514704 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.571352 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:40 crc kubenswrapper[4867]: I0214 04:30:40.578844 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-cp76f" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" containerID="cri-o://287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" gracePeriod=10 Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.133176 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233033 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233269 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233423 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.233493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") pod \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\" (UID: \"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7\") " Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.252360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6" (OuterVolumeSpecName: "kube-api-access-gndq6") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "kube-api-access-gndq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.293947 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.310136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.310306 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config" (OuterVolumeSpecName: "config") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.314212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" (UID: "af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.335649 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336433 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gndq6\" (UniqueName: \"kubernetes.io/projected/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-kube-api-access-gndq6\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336565 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336643 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.336736 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712125 4867 generic.go:334] "Generic (PLEG): container finished" podID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" exitCode=0 Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-cp76f" event={"ID":"af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7","Type":"ContainerDied","Data":"41aaccd20d5bf4daeae755d0c155b427f29d56138b6d3562c58792965bd5ee9b"} Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712621 4867 scope.go:117] "RemoveContainer" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.712828 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-cp76f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.762210 4867 scope.go:117] "RemoveContainer" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.775393 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.784712 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-cp76f"] Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786167 4867 scope.go:117] "RemoveContainer" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: E0214 04:30:41.786639 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": container with ID starting with 287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73 not found: ID does not exist" containerID="287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786684 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73"} err="failed to get container status \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": rpc error: code = NotFound desc = could not find container \"287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73\": container with ID starting with 287c9079ac6589b0605e06afbed45de89a1a8760239c3526cc0564c3247ada73 not found: ID does not exist" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.786711 4867 scope.go:117] "RemoveContainer" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 04:30:41 crc kubenswrapper[4867]: E0214 04:30:41.787017 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": container with ID starting with 06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f not found: ID does not exist" containerID="06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f" Feb 14 04:30:41 crc kubenswrapper[4867]: I0214 04:30:41.787052 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f"} err="failed to get container status \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": rpc error: code = NotFound desc = could not find container \"06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f\": container with ID starting with 06776f7c91b51ef4ae24e9a96a1d7ce732c0aeceef3722062fef6d1c2167d74f not found: ID does not exist" Feb 14 04:30:43 crc kubenswrapper[4867]: I0214 04:30:43.009368 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" path="/var/lib/kubelet/pods/af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7/volumes" Feb 14 04:30:43 crc kubenswrapper[4867]: I0214 04:30:43.370640 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 14 04:30:46 crc kubenswrapper[4867]: I0214 04:30:46.771102 4867 
generic.go:334] "Generic (PLEG): container finished" podID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerID="cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288" exitCode=0 Feb 14 04:30:46 crc kubenswrapper[4867]: I0214 04:30:46.771207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerDied","Data":"cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288"} Feb 14 04:30:47 crc kubenswrapper[4867]: I0214 04:30:47.920683 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.057742 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.257288 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.354672 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427660 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.427885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.428045 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") pod \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\" (UID: \"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2\") " Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.434956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.434994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh" (OuterVolumeSpecName: "kube-api-access-wlznh") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "kube-api-access-wlznh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.467447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.501420 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data" (OuterVolumeSpecName: "config-data") pod "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" (UID: "e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534737 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534777 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534788 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlznh\" (UniqueName: \"kubernetes.io/projected/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-kube-api-access-wlznh\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.534801 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794852 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gzvxs" event={"ID":"e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2","Type":"ContainerDied","Data":"a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e"} Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794902 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f2241d04b1388d688071a01711ae33a99077041ba77e2f0164bc2d8abe8d1e" Feb 14 04:30:48 crc kubenswrapper[4867]: I0214 04:30:48.794946 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-gzvxs" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.259892 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260635 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260650 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260671 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: E0214 04:30:49.260702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="init" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260707 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="init" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260895 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" containerName="glance-db-sync" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.260920 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="af541ba1-416f-49bd-a2cf-e3cc9a0eb3e7" containerName="dnsmasq-dns" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.262111 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.279776 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.352965 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.353690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.353942 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354348 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.354695 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457496 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457570 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457670 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457709 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457740 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.457778 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.458424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.458563 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.459396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.479026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvg6h\" (UniqueName: 
\"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"dnsmasq-dns-74f6bcbc87-shjcj\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:49 crc kubenswrapper[4867]: I0214 04:30:49.583989 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.133980 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.655584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.657243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.671676 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.790282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.790666 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.832628 4867 generic.go:334] "Generic (PLEG): container finished" podID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerID="16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6" exitCode=0 Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.832927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6"} Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.833038 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerStarted","Data":"ebbc4da8bb363e9a0155ec0e870c82eae82810ab31f3b604e5582d38957c9d4d"} Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.901158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.901215 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc 
kubenswrapper[4867]: I0214 04:30:50.902695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.935765 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"cinder-db-create-9vmb7\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") " pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.937113 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.938915 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.955456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.977931 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7" Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.996620 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:50 crc kubenswrapper[4867]: I0214 04:30:50.998087 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.002534 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.002734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.007489 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.074839 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.104354 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.105158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.127306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"heat-db-create-f62v7\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") " pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.152614 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.153964 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159247 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159524 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.159644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.161864 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.172042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.206493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.206640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.208066 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.231108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"cinder-fad3-account-create-update-zwwh5\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") " pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.246258 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.248099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.268041 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.283009 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.284674 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.286582 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.293955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308842 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.308888 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.354710 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.356373 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.367658 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.372764 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.374494 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.380777 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.395726 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.399544 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410713 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410761 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410790 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.410820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411119 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411222 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " 
pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.411499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.415876 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.416078 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.433473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.437117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"keystone-db-sync-gk75z\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.442129 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"neutron-db-create-8zqfs\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") " pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.473261 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.474775 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.484341 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.484605 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.486571 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513254 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.513542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.514458 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.514969 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.534274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"barbican-db-create-7kcws\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.549202 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"barbican-3b6b-account-create-update-74g2s\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") " pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.585098 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.604348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615396 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615627 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.615666 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.616685 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: 
I0214 04:30:51.630355 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.641175 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"neutron-bab0-account-create-update-kmfpg\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.689535 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.699778 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.717522 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.717585 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.719156 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.748276 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"heat-07f7-account-create-update-k24c7\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") " pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.811054 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.872738 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerStarted","Data":"42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3"} Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.873820 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.876904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerStarted","Data":"18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"} Feb 14 04:30:51 crc kubenswrapper[4867]: I0214 04:30:51.901707 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podStartSLOduration=2.901686775 podStartE2EDuration="2.901686775s" podCreationTimestamp="2026-02-14 04:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:51.892429199 +0000 UTC m=+1283.973366523" watchObservedRunningTime="2026-02-14 04:30:51.901686775 +0000 UTC m=+1283.982624089" Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.042898 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.176613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.337407 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.338742 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd001336_81f9_43f6_9540_432047e6c98a.slice/crio-3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03 WatchSource:0}: Error finding container 3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03: Status 404 returned error can't find the container with id 3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03 Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.424661 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6961722f_b14d_42f2_bd56_68686c2e8a9a.slice/crio-9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73 WatchSource:0}: Error finding container 9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73: Status 404 returned error can't find the container with id 9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.429831 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.480353 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.615249 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:30:52 crc 
kubenswrapper[4867]: I0214 04:30:52.630484 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.679443 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f3a1a1_5734_4782_98e1_1eb22cfbdf93.slice/crio-2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce WatchSource:0}: Error finding container 2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce: Status 404 returned error can't find the container with id 2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.679724 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06 WatchSource:0}: Error finding container accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06: Status 404 returned error can't find the container with id accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.727663 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:30:52 crc kubenswrapper[4867]: W0214 04:30:52.742496 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815 WatchSource:0}: Error finding container e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815: Status 404 returned error can't find the container with id e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.891540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerStarted","Data":"4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.892861 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerStarted","Data":"64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895320 4867 generic.go:334] "Generic (PLEG): container finished" podID="9c993d62-94a7-4903-b984-adcef36b53b8" containerID="b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerDied","Data":"b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.895491 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerStarted","Data":"27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.898315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerStarted","Data":"accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.907370 4867 generic.go:334] "Generic (PLEG): container finished" podID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerID="8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.907473 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerDied","Data":"8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.911140 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerStarted","Data":"e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913697 4867 generic.go:334] "Generic (PLEG): container finished" podID="bd001336-81f9-43f6-9540-432047e6c98a" containerID="e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3" exitCode=0 Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerDied","Data":"e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.913827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerStarted","Data":"3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.914955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerStarted","Data":"2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.922085 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerStarted","Data":"dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c"} Feb 14 04:30:52 crc kubenswrapper[4867]: I0214 04:30:52.922121 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerStarted","Data":"9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"} Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.005224 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-3b6b-account-create-update-74g2s" podStartSLOduration=2.005199656 podStartE2EDuration="2.005199656s" podCreationTimestamp="2026-02-14 04:30:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:30:52.989410838 +0000 UTC m=+1285.070348152" watchObservedRunningTime="2026-02-14 04:30:53.005199656 +0000 UTC m=+1285.086136970" 
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.371009 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.388158 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:53 crc kubenswrapper[4867]: E0214 04:30:53.493683 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5e9025_3781_4461_98d7_0d0d72c3b59b.slice/crio-conmon-cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1826e5b_3563_455f_9caf_9c4ee203210f.slice/crio-conmon-4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425.scope\": RecentStats: unable to find data in memory cache]"
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.934789 4867 generic.go:334] "Generic (PLEG): container finished" podID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerID="bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.934863 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerDied","Data":"bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.937877 4867 generic.go:334] "Generic (PLEG): container finished" podID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerID="dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.937913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerDied","Data":"dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.941546 4867 generic.go:334] "Generic (PLEG): container finished" podID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerID="8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.941649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerDied","Data":"8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.943831 4867 generic.go:334] "Generic (PLEG): container finished" podID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerID="4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.943882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerDied","Data":"4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.945687 4867 generic.go:334] "Generic (PLEG): container finished" podID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerID="cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332" exitCode=0
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.945758 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerDied","Data":"cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332"}
Feb 14 04:30:53 crc kubenswrapper[4867]: I0214 04:30:53.954856 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.500953 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") pod \"bd001336-81f9-43f6-9540-432047e6c98a\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") pod \"bd001336-81f9-43f6-9540-432047e6c98a\" (UID: \"bd001336-81f9-43f6-9540-432047e6c98a\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd001336-81f9-43f6-9540-432047e6c98a" (UID: "bd001336-81f9-43f6-9540-432047e6c98a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.595956 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd001336-81f9-43f6-9540-432047e6c98a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.602281 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc" (OuterVolumeSpecName: "kube-api-access-nx5sc") pod "bd001336-81f9-43f6-9540-432047e6c98a" (UID: "bd001336-81f9-43f6-9540-432047e6c98a"). InnerVolumeSpecName "kube-api-access-nx5sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.680784 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.686919 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.698892 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx5sc\" (UniqueName: \"kubernetes.io/projected/bd001336-81f9-43f6-9540-432047e6c98a-kube-api-access-nx5sc\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.801822 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") pod \"9c993d62-94a7-4903-b984-adcef36b53b8\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.802807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") pod \"f90d34b6-263e-4515-a13a-a41fda1c40ca\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.802914 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") pod \"9c993d62-94a7-4903-b984-adcef36b53b8\" (UID: \"9c993d62-94a7-4903-b984-adcef36b53b8\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.803146 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") pod \"f90d34b6-263e-4515-a13a-a41fda1c40ca\" (UID: \"f90d34b6-263e-4515-a13a-a41fda1c40ca\") "
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.804436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f90d34b6-263e-4515-a13a-a41fda1c40ca" (UID: "f90d34b6-263e-4515-a13a-a41fda1c40ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.804979 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c993d62-94a7-4903-b984-adcef36b53b8" (UID: "9c993d62-94a7-4903-b984-adcef36b53b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.809610 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs" (OuterVolumeSpecName: "kube-api-access-fhjfs") pod "9c993d62-94a7-4903-b984-adcef36b53b8" (UID: "9c993d62-94a7-4903-b984-adcef36b53b8"). InnerVolumeSpecName "kube-api-access-fhjfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.812485 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np" (OuterVolumeSpecName: "kube-api-access-5n9np") pod "f90d34b6-263e-4515-a13a-a41fda1c40ca" (UID: "f90d34b6-263e-4515-a13a-a41fda1c40ca"). InnerVolumeSpecName "kube-api-access-5n9np". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906444 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c993d62-94a7-4903-b984-adcef36b53b8-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906485 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n9np\" (UniqueName: \"kubernetes.io/projected/f90d34b6-263e-4515-a13a-a41fda1c40ca-kube-api-access-5n9np\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906497 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhjfs\" (UniqueName: \"kubernetes.io/projected/9c993d62-94a7-4903-b984-adcef36b53b8-kube-api-access-fhjfs\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.906522 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f90d34b6-263e-4515-a13a-a41fda1c40ca-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fad3-account-create-update-zwwh5" event={"ID":"bd001336-81f9-43f6-9540-432047e6c98a","Type":"ContainerDied","Data":"3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962124 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e22e2f973dc94c1eb0c671f62dfabd49a9e62ca29a034bc4605cec2c1c2cb03"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.962077 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fad3-account-create-update-zwwh5"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.967452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-f62v7" event={"ID":"9c993d62-94a7-4903-b984-adcef36b53b8","Type":"ContainerDied","Data":"27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.967563 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d170506f928ebc8901447eb428c9fc3a1990f4e11bb4e89c49ea05c8cadca9"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.968452 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-f62v7"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9vmb7" event={"ID":"f90d34b6-263e-4515-a13a-a41fda1c40ca","Type":"ContainerDied","Data":"18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"}
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972282 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18930daf07a76a05764aee269daec1f5c915570e7f3862b364d2758a4e346023"
Feb 14 04:30:54 crc kubenswrapper[4867]: I0214 04:30:54.972376 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9vmb7"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.007749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-07f7-account-create-update-k24c7" event={"ID":"b1826e5b-3563-455f-9caf-9c4ee203210f","Type":"ContainerDied","Data":"e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.008257 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ca34489f1971b93da8534dbc2ac608bb089f7e2eaca7201880d10011bb4815"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.011721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bab0-account-create-update-kmfpg" event={"ID":"2c5e9025-3781-4461-98d7-0d0d72c3b59b","Type":"ContainerDied","Data":"accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.011749 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="accc0a0ad0ec78eb75ebe3797d9c1962821c2a1178aef8aa910c8f1b960f9f06"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.013172 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7kcws" event={"ID":"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93","Type":"ContainerDied","Data":"2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.013195 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fa740a1560381c1be588a63fa6eca1a87525a10489c0023604b28bbb422bcce"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.014191 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3b6b-account-create-update-74g2s" event={"ID":"6961722f-b14d-42f2-bd56-68686c2e8a9a","Type":"ContainerDied","Data":"9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.014212 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2a66b669523bd8ec4ed03fffefd52b1e590786cf2f25c6600bb3d0803c2a73"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.015472 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8zqfs" event={"ID":"c14b9ea2-b4ee-4365-8b77-d58ff122fabb","Type":"ContainerDied","Data":"4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"}
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.015498 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ec63f92ddcb6034ab74dfd7e4ce3e903a8d6e48acae9dd2331725f1ae872cc4"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.094679 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.109985 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.120279 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.167697 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.178880 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg"
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189308 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") pod \"b1826e5b-3563-455f-9caf-9c4ee203210f\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") pod \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") pod \"b1826e5b-3563-455f-9caf-9c4ee203210f\" (UID: \"b1826e5b-3563-455f-9caf-9c4ee203210f\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189597 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") pod \"6961722f-b14d-42f2-bd56-68686c2e8a9a\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") pod \"6961722f-b14d-42f2-bd56-68686c2e8a9a\" (UID: \"6961722f-b14d-42f2-bd56-68686c2e8a9a\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.189685 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") pod \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\" (UID: \"c14b9ea2-b4ee-4365-8b77-d58ff122fabb\") "
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.190836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c14b9ea2-b4ee-4365-8b77-d58ff122fabb" (UID: "c14b9ea2-b4ee-4365-8b77-d58ff122fabb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.190836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1826e5b-3563-455f-9caf-9c4ee203210f" (UID: "b1826e5b-3563-455f-9caf-9c4ee203210f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.191546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6961722f-b14d-42f2-bd56-68686c2e8a9a" (UID: "6961722f-b14d-42f2-bd56-68686c2e8a9a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.193242 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8" (OuterVolumeSpecName: "kube-api-access-t5vl8") pod "b1826e5b-3563-455f-9caf-9c4ee203210f" (UID: "b1826e5b-3563-455f-9caf-9c4ee203210f"). InnerVolumeSpecName "kube-api-access-t5vl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.194965 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg" (OuterVolumeSpecName: "kube-api-access-7jlgg") pod "6961722f-b14d-42f2-bd56-68686c2e8a9a" (UID: "6961722f-b14d-42f2-bd56-68686c2e8a9a"). InnerVolumeSpecName "kube-api-access-7jlgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.202330 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v" (OuterVolumeSpecName: "kube-api-access-zrw7v") pod "c14b9ea2-b4ee-4365-8b77-d58ff122fabb" (UID: "c14b9ea2-b4ee-4365-8b77-d58ff122fabb"). InnerVolumeSpecName "kube-api-access-zrw7v".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291216 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") pod \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") pod \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") pod \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\" (UID: \"2c5e9025-3781-4461-98d7-0d0d72c3b59b\") " Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.291477 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") pod \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\" (UID: \"d1f3a1a1-5734-4782-98e1-1eb22cfbdf93\") " Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292149 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5vl8\" (UniqueName: \"kubernetes.io/projected/b1826e5b-3563-455f-9caf-9c4ee203210f-kube-api-access-t5vl8\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292169 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrw7v\" (UniqueName: \"kubernetes.io/projected/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-kube-api-access-zrw7v\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292181 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1826e5b-3563-455f-9caf-9c4ee203210f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292193 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6961722f-b14d-42f2-bd56-68686c2e8a9a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292201 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlgg\" (UniqueName: \"kubernetes.io/projected/6961722f-b14d-42f2-bd56-68686c2e8a9a-kube-api-access-7jlgg\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292209 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c14b9ea2-b4ee-4365-8b77-d58ff122fabb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.292599 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" (UID: "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.293433 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c5e9025-3781-4461-98d7-0d0d72c3b59b" (UID: "2c5e9025-3781-4461-98d7-0d0d72c3b59b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.296731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb" (OuterVolumeSpecName: "kube-api-access-sz4mb") pod "2c5e9025-3781-4461-98d7-0d0d72c3b59b" (UID: "2c5e9025-3781-4461-98d7-0d0d72c3b59b"). InnerVolumeSpecName "kube-api-access-sz4mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.296898 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx" (OuterVolumeSpecName: "kube-api-access-7srfx") pod "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" (UID: "d1f3a1a1-5734-4782-98e1-1eb22cfbdf93"). InnerVolumeSpecName "kube-api-access-7srfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.394524 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7srfx\" (UniqueName: \"kubernetes.io/projected/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-kube-api-access-7srfx\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395132 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c5e9025-3781-4461-98d7-0d0d72c3b59b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395146 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz4mb\" (UniqueName: \"kubernetes.io/projected/2c5e9025-3781-4461-98d7-0d0d72c3b59b-kube-api-access-sz4mb\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:58 crc kubenswrapper[4867]: I0214 04:30:58.395155 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037757 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7kcws" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037757 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bab0-account-create-update-kmfpg" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037786 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8zqfs" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037796 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-07f7-account-create-update-k24c7" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.037811 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerStarted","Data":"abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336"} Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.039468 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3b6b-account-create-update-74g2s" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.061294 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gk75z" podStartSLOduration=2.348928085 podStartE2EDuration="8.061247314s" podCreationTimestamp="2026-02-14 04:30:51 +0000 UTC" firstStartedPulling="2026-02-14 04:30:52.201741156 +0000 UTC m=+1284.282678470" lastFinishedPulling="2026-02-14 04:30:57.914060385 +0000 UTC m=+1289.994997699" observedRunningTime="2026-02-14 04:30:59.056942529 +0000 UTC m=+1291.137879853" watchObservedRunningTime="2026-02-14 04:30:59.061247314 +0000 UTC m=+1291.142184628" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.586712 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.688536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:30:59 crc kubenswrapper[4867]: I0214 04:30:59.688792 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" containerID="cri-o://cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" gracePeriod=10 Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.061937 4867 generic.go:334] "Generic (PLEG): container finished" podID="e2d457dc-19b4-4279-8c97-930f91291f98" containerID="cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" exitCode=0 Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.062119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c"} Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.326836 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.338858 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339320 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339355 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.339446 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.348798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2" (OuterVolumeSpecName: "kube-api-access-xh2g2") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "kube-api-access-xh2g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.410079 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.423684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.426581 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.431815 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config" (OuterVolumeSpecName: "config") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.441975 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.446340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") pod \"e2d457dc-19b4-4279-8c97-930f91291f98\" (UID: \"e2d457dc-19b4-4279-8c97-930f91291f98\") " Feb 14 04:31:00 crc kubenswrapper[4867]: W0214 04:31:00.446693 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e2d457dc-19b4-4279-8c97-930f91291f98/volumes/kubernetes.io~configmap/dns-swift-storage-0 Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.446886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e2d457dc-19b4-4279-8c97-930f91291f98" (UID: "e2d457dc-19b4-4279-8c97-930f91291f98"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449083 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449125 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449139 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh2g2\" (UniqueName: \"kubernetes.io/projected/e2d457dc-19b4-4279-8c97-930f91291f98-kube-api-access-xh2g2\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449156 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449173 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:00 crc kubenswrapper[4867]: I0214 04:31:00.449186 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2d457dc-19b4-4279-8c97-930f91291f98-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.073534 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" event={"ID":"e2d457dc-19b4-4279-8c97-930f91291f98","Type":"ContainerDied","Data":"3bb4499423a21fd6e6abed1bb4c19b4b9bfd321a8e7779e3689cb78809defb85"} Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.073604 4867 scope.go:117] "RemoveContainer" containerID="cfefeb2b897af2fb3d5d274167a23f6d2bce6f0ba7bf17c5af7d0be9357e047c" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.073685 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-sp44n" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.100984 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.101308 4867 scope.go:117] "RemoveContainer" containerID="3ce430069186ce26ff0516293d97e3eab6ca721fa6eae3b7d027a605885cee6e" Feb 14 04:31:01 crc kubenswrapper[4867]: I0214 04:31:01.112729 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-sp44n"] Feb 14 04:31:02 crc kubenswrapper[4867]: I0214 04:31:02.088080 4867 generic.go:334] "Generic (PLEG): container finished" podID="49af28f1-d33f-4717-81a7-4377bfef388c" containerID="abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336" exitCode=0 Feb 14 04:31:02 crc kubenswrapper[4867]: I0214 04:31:02.088132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerDied","Data":"abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336"} Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.014600 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" path="/var/lib/kubelet/pods/e2d457dc-19b4-4279-8c97-930f91291f98/volumes" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.522425 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.613688 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.614737 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.614964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") pod \"49af28f1-d33f-4717-81a7-4377bfef388c\" (UID: \"49af28f1-d33f-4717-81a7-4377bfef388c\") " Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.623612 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9" (OuterVolumeSpecName: "kube-api-access-j9gd9") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "kube-api-access-j9gd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.655886 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.675563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data" (OuterVolumeSpecName: "config-data") pod "49af28f1-d33f-4717-81a7-4377bfef388c" (UID: "49af28f1-d33f-4717-81a7-4377bfef388c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718549 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718596 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9gd9\" (UniqueName: \"kubernetes.io/projected/49af28f1-d33f-4717-81a7-4377bfef388c-kube-api-access-j9gd9\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:03 crc kubenswrapper[4867]: I0214 04:31:03.718616 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49af28f1-d33f-4717-81a7-4377bfef388c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gk75z" event={"ID":"49af28f1-d33f-4717-81a7-4377bfef388c","Type":"ContainerDied","Data":"64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b"} Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107915 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fb40663b912dd7645436912b2fd2796b557bdb87fefac71729dbd2b250227b" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.107981 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gk75z" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.381084 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382389 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382424 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382431 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382452 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382467 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382472 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382485 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382492 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382520 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382526 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382545 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382551 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="init" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382568 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="init" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382577 
4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382583 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382591 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382616 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: E0214 04:31:04.382627 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382635 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382834 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382847 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382855 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" containerName="keystone-db-sync" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382867 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382880 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d457dc-19b4-4279-8c97-930f91291f98" containerName="dnsmasq-dns" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382894 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382903 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382912 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd001336-81f9-43f6-9540-432047e6c98a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382927 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" containerName="mariadb-database-create" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.382938 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" containerName="mariadb-account-create-update" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.384664 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.404173 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.427688 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.429159 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.432076 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.432117 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434441 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.434638 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.437562 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452563 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452694 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " 
pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452759 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452919 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.452947 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555229 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555274 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: 
\"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555319 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555430 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555459 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555549 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555611 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555701 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555718 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 
14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.555734 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.556433 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.557082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.557793 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.558279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.558476 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.563409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.569601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.570637 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.572255 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.585920 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.586039 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pzjfh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.587380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.598823 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.599247 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.600164 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"dnsmasq-dns-847c4cc679-l4ptr\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.604031 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.615385 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"keystone-bootstrap-mvxwt\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.658646 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.658738 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.667651 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: 
\"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.702746 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.753944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.755166 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.768882 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772479 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.772541 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.779304 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.779800 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.780587 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-76c2m" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.783698 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.818199 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.819622 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"heat-db-sync-246z7\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.844019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"heat-db-sync-246z7\" (UID: 
\"18fb2b12-f922-4976-8e05-6e78a8751456\") " pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908625 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.908931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.924716 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.984213 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.986042 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.989137 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:04 crc kubenswrapper[4867]: I0214 04:31:04.989154 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-p86vr" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.042094 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.046988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047123 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.047540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.057552 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.085186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.089473 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.093649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.096261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.098010 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"cinder-db-sync-grkqh\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.117545 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.118878 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.118979 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122472 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122903 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jbsbl" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.122925 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.126076 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.167900 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.168579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.169466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.169587 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.173178 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.188774 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.212984 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.215144 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.219767 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.220149 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jvmrs" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.220461 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.263168 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.283866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.283989 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.284070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.284830 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.285009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod 
\"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287287 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287362 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287395 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287477 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287500 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.287583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.290901 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.291017 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod 
\"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.303900 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.306322 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.318427 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.320390 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"barbican-db-sync-mklx7\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.328851 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.329155 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.385488 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412866 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.412946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.413022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.413084 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.424190 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.424386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425297 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425462 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425493 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.425594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 
04:31:05.425671 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.426948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.429140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.430380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.434682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.453578 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.483185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.503271 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"neutron-db-sync-425tq\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") " pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.532976 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"dnsmasq-dns-785d8bcb8c-8g8xm\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538857 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538928 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.538997 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539073 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539119 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539140 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 
04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539195 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.539217 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.541618 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.546056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.547190 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.557451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.586472 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"placement-db-sync-9zrmj\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") " pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646254 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646392 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: 
\"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646436 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646455 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646480 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.646518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.649325 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.649830 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.654736 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.658072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.658540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.668311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 
04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.671421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"ceilometer-0\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.736138 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.753540 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.794068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.805983 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.810912 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.814044 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.817998 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.818227 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.823134 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.825366 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.825701 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.850963 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.863785 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj" Feb 14 04:31:05 crc kubenswrapper[4867]: W0214 04:31:05.865972 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18fb2b12_f922_4976_8e05_6e78a8751456.slice/crio-9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff WatchSource:0}: Error finding container 9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff: Status 404 returned error can't find the container with id 9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.866625 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.866851 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.869025 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.869253 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.879321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.945007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970247 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970559 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970581 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970611 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970707 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970799 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970921 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:05 crc kubenswrapper[4867]: I0214 04:31:05.970957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073371 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073418 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073598 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073639 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073656 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073711 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.073768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078614 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078668 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.078850 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.079715 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.101305 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.101362 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.103864 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.104020 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.112948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.115742 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.116652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.121372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.122766 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.124914 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.125408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.126024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.156797 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.210059 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerStarted","Data":"9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff"} Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.240190 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerStarted","Data":"f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93"} Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.240250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerStarted","Data":"2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df"} Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272471 4867 generic.go:334] "Generic (PLEG): container finished" podID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerID="457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc" exitCode=0 Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerDied","Data":"457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc"} Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.272567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerStarted","Data":"431b7a707179dbdb628432b420ce048e47de472cc8d7794e6aaafcbf07fdc73a"} Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.309743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.310184 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.334435 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mvxwt" podStartSLOduration=2.334416463 podStartE2EDuration="2.334416463s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:06.286178645 +0000 UTC m=+1298.367115949" watchObservedRunningTime="2026-02-14 04:31:06.334416463 +0000 UTC m=+1298.415353777" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.489542 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.506690 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.529137 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.763578 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.809627 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.875730 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:06 crc kubenswrapper[4867]: I0214 04:31:06.897426 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.214239 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.263966 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.310091 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.340601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerStarted","Data":"f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1"} Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.346523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"fec759d47361c43e0a7e0280d89486799080a9e793713da877ee4655c98870f4"} Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.349337 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerStarted","Data":"b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8"} Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.358745 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerStarted","Data":"b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90"} Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.366978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerStarted","Data":"d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef"} Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.412115 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452333 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452426 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452524 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.452553 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc 
kubenswrapper[4867]: I0214 04:31:07.452692 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") pod \"2b19d645-1c0b-4b85-a052-d90851f5f063\" (UID: \"2b19d645-1c0b-4b85-a052-d90851f5f063\") " Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.462655 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t" (OuterVolumeSpecName: "kube-api-access-jp48t") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "kube-api-access-jp48t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.494557 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.528299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.529169 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config" (OuterVolumeSpecName: "config") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.538945 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.541870 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b19d645-1c0b-4b85-a052-d90851f5f063" (UID: "2b19d645-1c0b-4b85-a052-d90851f5f063"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562623 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562664 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562674 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562686 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jp48t\" (UniqueName: \"kubernetes.io/projected/2b19d645-1c0b-4b85-a052-d90851f5f063-kube-api-access-jp48t\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562699 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.562711 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b19d645-1c0b-4b85-a052-d90851f5f063-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.747167 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:07 crc kubenswrapper[4867]: I0214 04:31:07.918105 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.405827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.408982 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" event={"ID":"2b19d645-1c0b-4b85-a052-d90851f5f063","Type":"ContainerDied","Data":"431b7a707179dbdb628432b420ce048e47de472cc8d7794e6aaafcbf07fdc73a"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.409040 4867 scope.go:117] "RemoveContainer" containerID="457cca977bf31867430732e0f7dc34d7da68ead872f10800d0e04226f49fdbbc" Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.409267 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-l4ptr" Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.413742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerStarted","Data":"b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431469 4867 generic.go:334] "Generic (PLEG): container finished" podID="5cef8824-386a-4c20-a176-e1964d5307f7" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" exitCode=0 Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.431704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerStarted","Data":"a8af3c3243557785237b106c328a49ec8c7419d5a57f62a13b9820888d0db44a"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.441551 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-425tq" podStartSLOduration=4.4415376890000005 podStartE2EDuration="4.441537689s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:08.439329151 +0000 UTC m=+1300.520266465" watchObservedRunningTime="2026-02-14 04:31:08.441537689 +0000 UTC m=+1300.522475003" Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.442690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde"} Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.541225 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:08 crc kubenswrapper[4867]: I0214 04:31:08.578460 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-l4ptr"] Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.054454 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" path="/var/lib/kubelet/pods/2b19d645-1c0b-4b85-a052-d90851f5f063/volumes" Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.511298 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerStarted","Data":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"} Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.512967 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.521892 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3"} Feb 14 04:31:09 crc 
kubenswrapper[4867]: I0214 04:31:09.530541 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562"} Feb 14 04:31:09 crc kubenswrapper[4867]: I0214 04:31:09.540845 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" podStartSLOduration=5.540828449 podStartE2EDuration="5.540828449s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:09.531244425 +0000 UTC m=+1301.612181759" watchObservedRunningTime="2026-02-14 04:31:09.540828449 +0000 UTC m=+1301.621765773" Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.597458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerStarted","Data":"15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa"} Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.598133 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" containerID="cri-o://6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" gracePeriod=30 Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.598757 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" containerID="cri-o://15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" gracePeriod=30 Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.609984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerStarted","Data":"69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d"} Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.610259 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" containerID="cri-o://509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" gracePeriod=30 Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.610326 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" containerID="cri-o://69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" gracePeriod=30 Feb 14 04:31:10 crc kubenswrapper[4867]: I0214 04:31:10.643118 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.643051916 podStartE2EDuration="6.643051916s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:10.626985211 +0000 UTC m=+1302.707922525" watchObservedRunningTime="2026-02-14 04:31:10.643051916 +0000 UTC m=+1302.723989230" Feb 14 04:31:10 crc kubenswrapper[4867]: 
I0214 04:31:10.672050 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.671754677 podStartE2EDuration="6.671754677s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:10.658377882 +0000 UTC m=+1302.739315206" watchObservedRunningTime="2026-02-14 04:31:10.671754677 +0000 UTC m=+1302.752691981" Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.659877 4867 generic.go:334] "Generic (PLEG): container finished" podID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerID="15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" exitCode=0 Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660188 4867 generic.go:334] "Generic (PLEG): container finished" podID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerID="6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" exitCode=143 Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660035 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa"} Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.660296 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3"} Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672795 4867 generic.go:334] "Generic (PLEG): container finished" podID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerID="69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" exitCode=0 Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672826 4867 generic.go:334] "Generic (PLEG): container finished" podID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerID="509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" exitCode=143 Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d"} Feb 14 04:31:11 crc kubenswrapper[4867]: I0214 04:31:11.672996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562"} Feb 14 04:31:12 crc kubenswrapper[4867]: I0214 04:31:12.710644 4867 generic.go:334] "Generic (PLEG): container finished" podID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerID="f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93" exitCode=0 Feb 14 04:31:12 crc kubenswrapper[4867]: I0214 04:31:12.710687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerDied","Data":"f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93"} Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.750471 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"07039199-dee5-4a0b-ae25-6eebf0cdc70b","Type":"ContainerDied","Data":"19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde"} Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.751054 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19177c982d8b998d8f576b4d6e1419b99adc37a7e5ffb6a3e9444f6ef274bbde" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.752678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f999df8e-7024-489e-ab2a-6b849be2f6ef","Type":"ContainerDied","Data":"c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8"} Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.752715 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53135ac12ac40ad101becf3cef02ee975e00b3bf3f0a6d25b7a38ce50c3d5b8" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.754164 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mvxwt" event={"ID":"c94481eb-b5a1-40d6-86ea-623f39b63b92","Type":"ContainerDied","Data":"2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df"} Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.754215 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fe3568a18d856985ad42eea1fdcd371f9fdb6e3f7cdf19c846cbc99fc4366df" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.808488 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.811430 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.819388 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.925781 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939164 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939340 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939427 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939517 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939575 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939630 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939700 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: 
\"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939911 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.939972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.940037 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.940086 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") pod \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\" (UID: \"07039199-dee5-4a0b-ae25-6eebf0cdc70b\") " Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.947404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs" (OuterVolumeSpecName: "logs") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.953015 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.988167 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.989142 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" containerID="cri-o://42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" gracePeriod=10 Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.991568 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts" (OuterVolumeSpecName: "scripts") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.993137 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g" (OuterVolumeSpecName: "kube-api-access-72f4g") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "kube-api-access-72f4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.993458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts" (OuterVolumeSpecName: "scripts") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:15 crc kubenswrapper[4867]: I0214 04:31:15.996899 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.001691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz" (OuterVolumeSpecName: "kube-api-access-xg9qz") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "kube-api-access-xg9qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.002396 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.031347 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.041922 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data" (OuterVolumeSpecName: "config-data") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042811 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") pod \"c94481eb-b5a1-40d6-86ea-623f39b63b92\" (UID: \"c94481eb-b5a1-40d6-86ea-623f39b63b92\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042837 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042870 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042892 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042914 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042940 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.042981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"f999df8e-7024-489e-ab2a-6b849be2f6ef\" (UID: \"f999df8e-7024-489e-ab2a-6b849be2f6ef\") " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043638 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg9qz\" (UniqueName: \"kubernetes.io/projected/c94481eb-b5a1-40d6-86ea-623f39b63b92-kube-api-access-xg9qz\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043658 
4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043671 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043682 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043692 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043703 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043716 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043723 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/07039199-dee5-4a0b-ae25-6eebf0cdc70b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043732 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72f4g\" (UniqueName: \"kubernetes.io/projected/07039199-dee5-4a0b-ae25-6eebf0cdc70b-kube-api-access-72f4g\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs" (OuterVolumeSpecName: "logs") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: W0214 04:31:16.043952 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c94481eb-b5a1-40d6-86ea-623f39b63b92/volumes/kubernetes.io~secret/config-data Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.043961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data" (OuterVolumeSpecName: "config-data") pod "c94481eb-b5a1-40d6-86ea-623f39b63b92" (UID: "c94481eb-b5a1-40d6-86ea-623f39b63b92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.053482 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (OuterVolumeSpecName: "glance") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.056745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.063422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc" (OuterVolumeSpecName: "kube-api-access-vs4zc") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "kube-api-access-vs4zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.067380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts" (OuterVolumeSpecName: "scripts") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.067715 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (OuterVolumeSpecName: "glance") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.089975 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data" (OuterVolumeSpecName: "config-data") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.093469 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.101033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07039199-dee5-4a0b-ae25-6eebf0cdc70b" (UID: "07039199-dee5-4a0b-ae25-6eebf0cdc70b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.114851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.136729 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data" (OuterVolumeSpecName: "config-data") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147942 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147969 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.147978 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c94481eb-b5a1-40d6-86ea-623f39b63b92-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148007 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148032 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148040 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148048 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f999df8e-7024-489e-ab2a-6b849be2f6ef-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148059 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs4zc\" (UniqueName: \"kubernetes.io/projected/f999df8e-7024-489e-ab2a-6b849be2f6ef-kube-api-access-vs4zc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148188 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148205 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" " Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148237 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.148249 4867 
reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07039199-dee5-4a0b-ae25-6eebf0cdc70b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.155797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f999df8e-7024-489e-ab2a-6b849be2f6ef" (UID: "f999df8e-7024-489e-ab2a-6b849be2f6ef"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.181681 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.181859 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25") on node "crc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.210118 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.210303 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d") on node "crc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251438 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f999df8e-7024-489e-ab2a-6b849be2f6ef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251483 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.251514 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.774992 4867 generic.go:334] "Generic (PLEG): container finished" podID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerID="42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" exitCode=0 Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775107 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3"} Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775134 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mvxwt" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775143 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.775239 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.833861 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.844686 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.865283 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.881756 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.897459 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898024 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898040 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898055 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898061 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898074 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898101 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898108 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898131 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898138 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: E0214 04:31:16.898157 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898350 4867 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-log" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898374 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" containerName="keystone-bootstrap" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898385 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898399 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b19d645-1c0b-4b85-a052-d90851f5f063" containerName="init" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.898412 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" containerName="glance-httpd" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.899584 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.907792 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-vtnl4" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908400 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.908574 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.911490 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.931739 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.944179 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.950136 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.951300 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.957914 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.968990 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969045 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969112 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969133 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969204 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969315 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:16 crc kubenswrapper[4867]: I0214 04:31:16.969389 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.022601 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07039199-dee5-4a0b-ae25-6eebf0cdc70b" path="/var/lib/kubelet/pods/07039199-dee5-4a0b-ae25-6eebf0cdc70b/volumes" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.024328 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f999df8e-7024-489e-ab2a-6b849be2f6ef" path="/var/lib/kubelet/pods/f999df8e-7024-489e-ab2a-6b849be2f6ef/volumes" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.073607 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074681 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074784 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.074928 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075008 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075058 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075184 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075212 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075290 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 
crc kubenswrapper[4867]: I0214 04:31:17.075377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.075384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.076409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.080774 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.080815 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.081787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.092569 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.092863 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.102831 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.103140 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.123722 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mvxwt"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.140552 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.141288 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.142118 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147100 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147260 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.147535 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.149157 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.149485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.164438 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178622 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178649 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178755 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178796 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.178940 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.179386 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.185558 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186625 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186879 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.186905 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.190856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.196702 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.196888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.232117 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.252415 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.280452 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.280997 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281075 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281142 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281214 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.281268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.284797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.285163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.285176 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.286565 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.288047 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.300077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"keystone-bootstrap-gdzwh\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") " pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:17 crc kubenswrapper[4867]: I0214 04:31:17.572944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh" Feb 14 04:31:19 crc kubenswrapper[4867]: I0214 04:31:19.013219 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94481eb-b5a1-40d6-86ea-623f39b63b92" path="/var/lib/kubelet/pods/c94481eb-b5a1-40d6-86ea-623f39b63b92/volumes" Feb 14 04:31:19 crc kubenswrapper[4867]: I0214 04:31:19.585464 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.959641 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.960668 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2cmn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-9zrmj_openstack(ffefbab2-8288-4eaa-9df3-e95383cdf19d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:31:23 crc kubenswrapper[4867]: E0214 04:31:23.961867 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-9zrmj" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" Feb 14 04:31:24 crc kubenswrapper[4867]: I0214 04:31:24.585941 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Feb 14 04:31:24 crc kubenswrapper[4867]: E0214 04:31:24.863578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-9zrmj" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" Feb 14 04:31:31 crc kubenswrapper[4867]: I0214 04:31:31.251186 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 14 04:31:31 crc kubenswrapper[4867]: I0214 04:31:31.251930 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.935827 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.936303 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x77fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-mklx7_openstack(cccb73cc-2b89-4363-b7ca-44dfa627d9f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.937597 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-mklx7" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" Feb 14 04:31:33 crc kubenswrapper[4867]: I0214 04:31:33.953649 4867 generic.go:334] "Generic (PLEG): container finished" podID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerID="b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468" exitCode=0 Feb 14 04:31:33 crc kubenswrapper[4867]: I0214 04:31:33.953815 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" 
event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerDied","Data":"b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468"} Feb 14 04:31:33 crc kubenswrapper[4867]: E0214 04:31:33.956139 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-mklx7" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.082640 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180102 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180268 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180300 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.180375 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") pod \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\" (UID: \"34e3aca5-c7d4-4401-b301-1ab6497cb1d7\") " Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.188189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h" (OuterVolumeSpecName: "kube-api-access-bvg6h") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "kube-api-access-bvg6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.234352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config" (OuterVolumeSpecName: "config") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.239216 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.246523 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.259946 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.264684 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "34e3aca5-c7d4-4401-b301-1ab6497cb1d7" (UID: "34e3aca5-c7d4-4401-b301-1ab6497cb1d7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283481 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283542 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283557 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283568 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283579 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvg6h\" (UniqueName: \"kubernetes.io/projected/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-kube-api-access-bvg6h\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.283588 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34e3aca5-c7d4-4401-b301-1ab6497cb1d7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.586429 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: i/o timeout" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.586553 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967732 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-shjcj" event={"ID":"34e3aca5-c7d4-4401-b301-1ab6497cb1d7","Type":"ContainerDied","Data":"ebbc4da8bb363e9a0155ec0e870c82eae82810ab31f3b604e5582d38957c9d4d"} Feb 14 04:31:34 crc kubenswrapper[4867]: I0214 04:31:34.967856 4867 scope.go:117] "RemoveContainer" containerID="42be2316b4ae343fcb4b814718eabf5f7933e5e7ed598513fca11b7935007ed3" Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.019300 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.024258 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-shjcj"] Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.229687 4867 scope.go:117] "RemoveContainer" containerID="16409e89382c3b3bacc54f4af34e446329e86ddc39bf082ba4bf9fe2d118dfb6" Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.279825 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.280000 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-87zjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGr
oup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-grkqh_openstack(9c973bde-ff14-4cce-9f9c-57354dbd4adb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 04:31:35 crc kubenswrapper[4867]: E0214 04:31:35.281146 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-grkqh" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb"
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.561931 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq"
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641244 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") "
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641359 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") "
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.641543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") pod \"ed6edd10-56a9-4431-bb38-7b266f802e63\" (UID: \"ed6edd10-56a9-4431-bb38-7b266f802e63\") "
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.648913 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw" (OuterVolumeSpecName: "kube-api-access-fzcfw") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "kube-api-access-fzcfw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.649870 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzcfw\" (UniqueName: \"kubernetes.io/projected/ed6edd10-56a9-4431-bb38-7b266f802e63-kube-api-access-fzcfw\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.679200 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config" (OuterVolumeSpecName: "config") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.684798 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed6edd10-56a9-4431-bb38-7b266f802e63" (UID: "ed6edd10-56a9-4431-bb38-7b266f802e63"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.711316 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"]
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.717795 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.751422 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.751455 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed6edd10-56a9-4431-bb38-7b266f802e63-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.902910 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-425tq" event={"ID":"ed6edd10-56a9-4431-bb38-7b266f802e63","Type":"ContainerDied","Data":"d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef"}
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990836 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-425tq"
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.990858 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d78bdf76524edb85205e3ac00a9a89a4911b2fe692381100ea6ca9ff406ccaef"
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.992910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"a3cc1da73263e85bbf2b7d750ab646192fbf22c988007a55f775707de3030a59"}
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.995052 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerStarted","Data":"42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201"}
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.995088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerStarted","Data":"884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9"}
Feb 14 04:31:35 crc kubenswrapper[4867]: I0214 04:31:35.999389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3"}
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.001377 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerStarted","Data":"60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a"}
Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.002965 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-grkqh" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.021760 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gdzwh" podStartSLOduration=19.021740138 podStartE2EDuration="19.021740138s" podCreationTimestamp="2026-02-14 04:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:36.016998262 +0000 UTC m=+1328.097935576" watchObservedRunningTime="2026-02-14 04:31:36.021740138 +0000 UTC m=+1328.102677452"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.061373 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-246z7" podStartSLOduration=2.804693954 podStartE2EDuration="32.061348167s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:05.88675425 +0000 UTC m=+1297.967691564" lastFinishedPulling="2026-02-14 04:31:35.143408463 +0000 UTC m=+1327.224345777" observedRunningTime="2026-02-14 04:31:36.035070061 +0000 UTC m=+1328.116007375" watchObservedRunningTime="2026-02-14 04:31:36.061348167 +0000 UTC m=+1328.142285471"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.244203 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"]
Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.245044 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245065 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync"
Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.245109 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245116 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns"
Feb 14 04:31:36 crc kubenswrapper[4867]: E0214 04:31:36.245128 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="init"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245134 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="init"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245328 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" containerName="dnsmasq-dns"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.245361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" containerName="neutron-db-sync"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.252042 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.267325 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"]
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268719 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268901 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268922 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.268945 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.304713 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"]
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.307645 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316334 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316689 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jbsbl"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.316831 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.341159 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.341697 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"]
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.371954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372156 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.372745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374161 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.374856 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.379264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.379690 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.380141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.381323 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.409266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"dnsmasq-dns-55f844cf75-zkb5z\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484176 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484206 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484285 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.484313 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.490340 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.491696 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.494045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.518957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.519868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"neutron-74c5fcd7cb-sr8z9\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.648016 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.658289 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:36 crc kubenswrapper[4867]: I0214 04:31:36.761140 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:31:36 crc kubenswrapper[4867]: W0214 04:31:36.780837 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f21b5d2_75e5_4cc5_96d0_670e9ed88df0.slice/crio-a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b WatchSource:0}: Error finding container a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b: Status 404 returned error can't find the container with id a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b
Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.052318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e3aca5-c7d4-4401-b301-1ab6497cb1d7" path="/var/lib/kubelet/pods/34e3aca5-c7d4-4401-b301-1ab6497cb1d7/volumes"
Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.075841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b"}
Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.098965 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1"}
Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.286724 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"]
Feb 14 04:31:37 crc kubenswrapper[4867]: I0214 04:31:37.623889 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"]
Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.120098 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerStarted","Data":"fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb"}
Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.131416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"}
Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.148391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerStarted","Data":"12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034"}
Feb 14 04:31:38 crc kubenswrapper[4867]: I0214 04:31:38.229871 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.229852759 podStartE2EDuration="22.229852759s" podCreationTimestamp="2026-02-14 04:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:38.216063074 +0000 UTC m=+1330.297000388" watchObservedRunningTime="2026-02-14 04:31:38.229852759 +0000 UTC m=+1330.310790073"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.155635 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-569c46898f-bbd5l"]
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.161944 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.163279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.166477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.166785 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171666 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171678 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerStarted","Data":"39d679b02b54e70585a87ea7dbf473acb26533d3e4ea7319177999bccaf06766"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.171729 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.174612 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"]
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.181708 4867 generic.go:334] "Generic (PLEG): container finished" podID="41682938-f603-460d-91e2-9de423799697" containerID="89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae" exitCode=0
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.181786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.186670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerStarted","Data":"784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"}
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.259722 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c5fcd7cb-sr8z9" podStartSLOduration=3.259692149 podStartE2EDuration="3.259692149s" podCreationTimestamp="2026-02-14 04:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:39.227863706 +0000 UTC m=+1331.308801020" watchObservedRunningTime="2026-02-14 04:31:39.259692149 +0000 UTC m=+1331.340629453"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.322322 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.322300608 podStartE2EDuration="23.322300608s" podCreationTimestamp="2026-02-14 04:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:39.277960603 +0000 UTC m=+1331.358897917" watchObservedRunningTime="2026-02-14 04:31:39.322300608 +0000 UTC m=+1331.403237922"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350521 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350570 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350717 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.350833 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.452988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453104 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453238 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453281 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.453309 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.460879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.461284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.462636 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.466307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.467217 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.471494 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.475262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod \"neutron-569c46898f-bbd5l\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:39 crc kubenswrapper[4867]: I0214 04:31:39.640097 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.219259 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerStarted","Data":"cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da"}
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.254870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerStarted","Data":"3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0"}
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.254929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.293715 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-9zrmj" podStartSLOduration=3.6896220250000002 podStartE2EDuration="36.293689719s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.991604808 +0000 UTC m=+1299.072542112" lastFinishedPulling="2026-02-14 04:31:39.595672502 +0000 UTC m=+1331.676609806" observedRunningTime="2026-02-14 04:31:40.259216105 +0000 UTC m=+1332.340153419" watchObservedRunningTime="2026-02-14 04:31:40.293689719 +0000 UTC m=+1332.374627033"
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.321173 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" podStartSLOduration=4.321147937 podStartE2EDuration="4.321147937s" podCreationTimestamp="2026-02-14 04:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:40.293569536 +0000 UTC m=+1332.374506850" watchObservedRunningTime="2026-02-14 04:31:40.321147937 +0000 UTC m=+1332.402085251"
Feb 14 04:31:40 crc kubenswrapper[4867]: I0214 04:31:40.349729 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"]
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.271436 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9"}
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272151 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-569c46898f-bbd5l"
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34"}
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.272224 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerStarted","Data":"028f5efc08b53a55521858d44a43207730eee63dfa58503296592bae2f4868dd"}
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.285647 4867 generic.go:334] "Generic (PLEG): container finished" podID="87589008-b930-4698-b94b-883c707d5fb1" containerID="42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201" exitCode=0
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.285797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerDied","Data":"42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201"}
Feb 14 04:31:41 crc kubenswrapper[4867]: I0214 04:31:41.302601 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-569c46898f-bbd5l" podStartSLOduration=2.302577713 podStartE2EDuration="2.302577713s" podCreationTimestamp="2026-02-14 04:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:41.290168215 +0000 UTC m=+1333.371105549" watchObservedRunningTime="2026-02-14 04:31:41.302577713 +0000 UTC m=+1333.383515037"
Feb 14 04:31:45 crc kubenswrapper[4867]: I0214 04:31:45.343090 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerID="cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da" exitCode=0
Feb 14 04:31:45 crc kubenswrapper[4867]: I0214 04:31:45.343197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerDied","Data":"cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da"}
Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.359050 4867 generic.go:334] "Generic (PLEG): container finished" podID="18fb2b12-f922-4976-8e05-6e78a8751456" containerID="60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a" exitCode=0
Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.359104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerDied","Data":"60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a"}
Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.649714 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z"
Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.728951 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"]
Feb 14 04:31:46 crc kubenswrapper[4867]: I0214 04:31:46.729263 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns" containerID="cri-o://5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" gracePeriod=10
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.088630 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.109836 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.208971 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209041 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209182 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209354 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209389 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") pod \"87589008-b930-4698-b94b-883c707d5fb1\" (UID: \"87589008-b930-4698-b94b-883c707d5fb1\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.209647 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") pod \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\" (UID: \"ffefbab2-8288-4eaa-9df3-e95383cdf19d\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.215044 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs" (OuterVolumeSpecName: "logs") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.223649 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.223734 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts" (OuterVolumeSpecName: "scripts") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.224988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k" (OuterVolumeSpecName: "kube-api-access-85v7k") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "kube-api-access-85v7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.225057 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6" (OuterVolumeSpecName: "kube-api-access-2cmn6") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "kube-api-access-2cmn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.233203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.242687 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts" (OuterVolumeSpecName: "scripts") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.254799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257176 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.257201 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.273167 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281308 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281322 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.281332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.290692 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.292969 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data" (OuterVolumeSpecName: "config-data") pod "87589008-b930-4698-b94b-883c707d5fb1" (UID: "87589008-b930-4698-b94b-883c707d5fb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.302276 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.305598 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data" (OuterVolumeSpecName: "config-data") pod "ffefbab2-8288-4eaa-9df3-e95383cdf19d" (UID: "ffefbab2-8288-4eaa-9df3-e95383cdf19d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314226 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmn6\" (UniqueName: \"kubernetes.io/projected/ffefbab2-8288-4eaa-9df3-e95383cdf19d-kube-api-access-2cmn6\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314267 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314279 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314287 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314296 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314304 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffefbab2-8288-4eaa-9df3-e95383cdf19d-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314314 4867 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314323 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffefbab2-8288-4eaa-9df3-e95383cdf19d-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314330 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314338 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85v7k\" (UniqueName: \"kubernetes.io/projected/87589008-b930-4698-b94b-883c707d5fb1-kube-api-access-85v7k\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.314346 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87589008-b930-4698-b94b-883c707d5fb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.354324 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.359626 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.367542 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.372372 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401517 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gdzwh" event={"ID":"87589008-b930-4698-b94b-883c707d5fb1","Type":"ContainerDied","Data":"884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401565 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="884e46e9a9ccb7a1951c05016a7cfe503d95ce144f68a7a413c28878d0db0fb9"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.401711 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gdzwh"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.409310 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerStarted","Data":"f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.417608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.421944 4867 generic.go:334] "Generic (PLEG): container finished" podID="5cef8824-386a-4c20-a176-e1964d5307f7" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" exitCode=0
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422020 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422040 4867 scope.go:117] "RemoveContainer" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.422026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-8g8xm" event={"ID":"5cef8824-386a-4c20-a176-e1964d5307f7","Type":"ContainerDied","Data":"a8af3c3243557785237b106c328a49ec8c7419d5a57f62a13b9820888d0db44a"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.426244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-9zrmj" event={"ID":"ffefbab2-8288-4eaa-9df3-e95383cdf19d","Type":"ContainerDied","Data":"b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8"}
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.426303 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b409bcffdfa5ea471959aecebea943d810c68abab172eab94ceaa2964168c2d8"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.429928 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-9zrmj"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.469010 4867 scope.go:117] "RemoveContainer" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.469949 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-mklx7" podStartSLOduration=3.174195966 podStartE2EDuration="43.46992839s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.598973523 +0000 UTC m=+1298.679910837" lastFinishedPulling="2026-02-14 04:31:46.894705947 +0000 UTC m=+1338.975643261" observedRunningTime="2026-02-14 04:31:47.457627534 +0000 UTC m=+1339.538564848" watchObservedRunningTime="2026-02-14 04:31:47.46992839 +0000 UTC m=+1339.550865704"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519532 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.519930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.520854 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.520959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.521034 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") "
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.529717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs" (OuterVolumeSpecName: "kube-api-access-wwnbs") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "kube-api-access-wwnbs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.541006 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"]
Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.541457 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.541477 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap"
Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.541495 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547584 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns"
Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.547643 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="init"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547653 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="init"
Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.547718 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.547725 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548078 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" containerName="placement-db-sync"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548102 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" containerName="dnsmasq-dns"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548113 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="87589008-b930-4698-b94b-883c707d5fb1" containerName="keystone-bootstrap"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.548877 4867 scope.go:117] "RemoveContainer" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"
Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.549625 4867 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.559008 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": container with ID starting with 5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44 not found: ID does not exist" containerID="5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559060 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44"} err="failed to get container status \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": rpc error: code = NotFound desc = could not find container \"5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44\": container with ID starting with 5a32c2ef4aa73a15ff81381551f9faad42ec662d2800f7b41bd9d12693968e44 not found: ID does not exist" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559090 4867 scope.go:117] "RemoveContainer" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559485 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-jvmrs" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.559922 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560625 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560771 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.560636 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 14 04:31:47 crc kubenswrapper[4867]: E0214 04:31:47.561228 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": container with ID starting with 89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95 not found: ID does not exist" containerID="89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.561325 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95"} err="failed to get container status \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": rpc error: code = NotFound desc = could not find container \"89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95\": container with ID starting with 89e26c09a3c28860cf0f6c1bbbca98899e7df18ff66c3a51b5fe47e68eaecb95 not found: ID does not exist" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.581172 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.599777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config" (OuterVolumeSpecName: "config") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.619674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623276 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623573 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") pod \"5cef8824-386a-4c20-a176-e1964d5307f7\" (UID: \"5cef8824-386a-4c20-a176-e1964d5307f7\") " Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.623951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624002 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624030 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624341 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: 
I0214 04:31:47.624374 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624426 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: W0214 04:31:47.624487 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5cef8824-386a-4c20-a176-e1964d5307f7/volumes/kubernetes.io~configmap/dns-svc Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624521 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624624 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwnbs\" (UniqueName: \"kubernetes.io/projected/5cef8824-386a-4c20-a176-e1964d5307f7-kube-api-access-wwnbs\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624643 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624652 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.624662 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.636275 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.646361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5cef8824-386a-4c20-a176-e1964d5307f7" (UID: "5cef8824-386a-4c20-a176-e1964d5307f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731588 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731777 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731801 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731914 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.731926 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5cef8824-386a-4c20-a176-e1964d5307f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.732268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: 
\"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.737119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.743347 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.748130 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.749616 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.750421 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.758530 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"placement-74d7c6cb48-8wr7l\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:47 crc kubenswrapper[4867]: I0214 04:31:47.878666 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.069893 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.093911 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.111066 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-8g8xm"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156616 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.156789 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") pod \"18fb2b12-f922-4976-8e05-6e78a8751456\" (UID: \"18fb2b12-f922-4976-8e05-6e78a8751456\") " Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.183784 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn" (OuterVolumeSpecName: "kube-api-access-r8hsn") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: "18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "kube-api-access-r8hsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.228647 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: "18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256050 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:48 crc kubenswrapper[4867]: E0214 04:31:48.256732 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256760 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.256998 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" containerName="heat-db-sync" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.257808 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.260720 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.260898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261057 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261223 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261342 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.261574 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ffvbq" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.263324 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8hsn\" (UniqueName: \"kubernetes.io/projected/18fb2b12-f922-4976-8e05-6e78a8751456-kube-api-access-r8hsn\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.263339 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.280314 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.339132 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data" (OuterVolumeSpecName: "config-data") pod "18fb2b12-f922-4976-8e05-6e78a8751456" (UID: "18fb2b12-f922-4976-8e05-6e78a8751456"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365179 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365203 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365363 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365389 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.365661 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fb2b12-f922-4976-8e05-6e78a8751456-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:48 crc 
kubenswrapper[4867]: I0214 04:31:48.481048 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481177 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.481650 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.486743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487098 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.487193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.496210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-scripts\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.502013 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-fernet-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.502188 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-credential-keys\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.504330 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-internal-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.507960 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-public-tls-certs\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.537150 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-config-data\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.538632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddcc862-a10c-487c-aaa4-0e93df9c0005-combined-ca-bundle\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.546321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.555279 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdj5v\" (UniqueName: \"kubernetes.io/projected/1ddcc862-a10c-487c-aaa4-0e93df9c0005-kube-api-access-wdj5v\") pod \"keystone-7595b47f77-vtg9d\" (UID: \"1ddcc862-a10c-487c-aaa4-0e93df9c0005\") " pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564306 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-246z7" event={"ID":"18fb2b12-f922-4976-8e05-6e78a8751456","Type":"ContainerDied","Data":"9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff"} Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564359 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9289cefc22342b7fc66aa673bbc9c4e9b6d16e205beb2daae9082d5d1e900eff" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.564455 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-246z7" Feb 14 04:31:48 crc kubenswrapper[4867]: I0214 04:31:48.603637 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.062442 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cef8824-386a-4c20-a176-e1964d5307f7" path="/var/lib/kubelet/pods/5cef8824-386a-4c20-a176-e1964d5307f7/volumes" Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.279195 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7595b47f77-vtg9d"] Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.574931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7595b47f77-vtg9d" event={"ID":"1ddcc862-a10c-487c-aaa4-0e93df9c0005","Type":"ContainerStarted","Data":"284d4ef18c8f33c8c1b929f6ec01157fe34daeca23e98acbb24226cffb045a3a"} Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.577941 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218"} Feb 14 04:31:49 crc kubenswrapper[4867]: I0214 04:31:49.577972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"d72d747bf641f17caffe57b13805170a59917becd98a04f814a50119c9f846ba"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.366414 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"] Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.384387 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"] Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.384546 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.477911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478012 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.478188 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580029 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580078 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: 
\"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580330 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.580485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ef45c32-32a1-4302-84e3-3ff7e864cb99-logs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.587800 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-scripts\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.587990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-combined-ca-bundle\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.591890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-internal-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.601486 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-config-data\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.601782 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef45c32-32a1-4302-84e3-3ff7e864cb99-public-tls-certs\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604250 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerStarted","Data":"95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604621 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.604677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.606348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6tp6\" (UniqueName: \"kubernetes.io/projected/2ef45c32-32a1-4302-84e3-3ff7e864cb99-kube-api-access-r6tp6\") pod \"placement-8574cd8bdd-r5cv6\" (UID: \"2ef45c32-32a1-4302-84e3-3ff7e864cb99\") " pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.610311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerStarted","Data":"933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.619548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7595b47f77-vtg9d" event={"ID":"1ddcc862-a10c-487c-aaa4-0e93df9c0005","Type":"ContainerStarted","Data":"31fb0f3c48111438ee031349650a61f4fe5bd218eb1d44f8b161df96998d98a0"} Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.620678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.631560 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-74d7c6cb48-8wr7l" podStartSLOduration=3.631538048 podStartE2EDuration="3.631538048s" podCreationTimestamp="2026-02-14 04:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:50.628256331 +0000 UTC m=+1342.709193645" watchObservedRunningTime="2026-02-14 04:31:50.631538048 +0000 UTC m=+1342.712475362" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.657016 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-grkqh" podStartSLOduration=4.335962361 podStartE2EDuration="46.656997433s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:06.218338847 +0000 UTC m=+1298.299276161" lastFinishedPulling="2026-02-14 04:31:48.539373929 +0000 UTC m=+1340.620311233" 
observedRunningTime="2026-02-14 04:31:50.654046184 +0000 UTC m=+1342.734983498" watchObservedRunningTime="2026-02-14 04:31:50.656997433 +0000 UTC m=+1342.737934747" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.680973 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7595b47f77-vtg9d" podStartSLOduration=2.680953287 podStartE2EDuration="2.680953287s" podCreationTimestamp="2026-02-14 04:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:50.678327188 +0000 UTC m=+1342.759264502" watchObservedRunningTime="2026-02-14 04:31:50.680953287 +0000 UTC m=+1342.761890601" Feb 14 04:31:50 crc kubenswrapper[4867]: I0214 04:31:50.723045 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.164297 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.168409 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.285938 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8574cd8bdd-r5cv6"] Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.357605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.357762 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.367175 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.505762 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 14 04:31:51 crc kubenswrapper[4867]: I0214 04:31:51.650853 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"5837652f3241f8ac7f996793c0e77dc2cc0983f1e1cb2f4705eb5aed2bfafc25"} Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.670979 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"17f1db18a03838b7c0b891920a932ab3620b823b7bc296bd601248587f10cc95"} Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.671334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8574cd8bdd-r5cv6" event={"ID":"2ef45c32-32a1-4302-84e3-3ff7e864cb99","Type":"ContainerStarted","Data":"5aad4aff4d66f881a3ea4da12e0740ea7f5d50327c7eaf6d2b1af7ad98769a29"} Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.671353 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:52 crc kubenswrapper[4867]: I0214 04:31:52.704589 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8574cd8bdd-r5cv6" podStartSLOduration=2.70456468 podStartE2EDuration="2.70456468s" podCreationTimestamp="2026-02-14 04:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:31:52.692400728 +0000 UTC m=+1344.773338062" watchObservedRunningTime="2026-02-14 04:31:52.70456468 +0000 UTC m=+1344.785502004" Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687257 4867 generic.go:334] "Generic (PLEG): container finished" podID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerID="f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635" exitCode=0 Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687400 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerDied","Data":"f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635"} Feb 14 04:31:53 crc kubenswrapper[4867]: I0214 04:31:53.687930 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:31:55 crc kubenswrapper[4867]: I0214 04:31:55.712187 4867 generic.go:334] "Generic (PLEG): container finished" podID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerID="933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38" exitCode=0 Feb 14 04:31:55 crc kubenswrapper[4867]: I0214 04:31:55.712275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerDied","Data":"933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38"} Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.578104 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689542 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.689569 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") pod \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\" (UID: \"cccb73cc-2b89-4363-b7ca-44dfa627d9f9\") " Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.698618 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.707146 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq" (OuterVolumeSpecName: "kube-api-access-x77fq") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "kube-api-access-x77fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.742459 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cccb73cc-2b89-4363-b7ca-44dfa627d9f9" (UID: "cccb73cc-2b89-4363-b7ca-44dfa627d9f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-mklx7" event={"ID":"cccb73cc-2b89-4363-b7ca-44dfa627d9f9","Type":"ContainerDied","Data":"f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1"} Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749635 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bbb81d52303ed15cfa9fbfd73e50a998ea92e54eddc8748836c35a398ce9c1" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.749704 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-mklx7" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793371 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793770 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x77fq\" (UniqueName: \"kubernetes.io/projected/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-kube-api-access-x77fq\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:57 crc kubenswrapper[4867]: I0214 04:31:57.793782 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cccb73cc-2b89-4363-b7ca-44dfa627d9f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.084609 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.209868 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210012 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210097 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210196 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") pod \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\" (UID: \"9c973bde-ff14-4cce-9f9c-57354dbd4adb\") " Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.210650 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.211957 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c973bde-ff14-4cce-9f9c-57354dbd4adb-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214077 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm" (OuterVolumeSpecName: "kube-api-access-87zjm") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "kube-api-access-87zjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts" (OuterVolumeSpecName: "scripts") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.214961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.238760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.275071 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data" (OuterVolumeSpecName: "config-data") pod "9c973bde-ff14-4cce-9f9c-57354dbd4adb" (UID: "9c973bde-ff14-4cce-9f9c-57354dbd4adb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.313909 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314167 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314231 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314289 4867 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9c973bde-ff14-4cce-9f9c-57354dbd4adb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.314410 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87zjm\" (UniqueName: \"kubernetes.io/projected/9c973bde-ff14-4cce-9f9c-57354dbd4adb-kube-api-access-87zjm\") on node \"crc\" DevicePath \"\"" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764228 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" containerID="cri-o://12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" gracePeriod=30 Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 
04:31:58.763975 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerStarted","Data":"c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f"} Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764388 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" containerID="cri-o://5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" gracePeriod=30 Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764349 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" containerID="cri-o://80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" gracePeriod=30 Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.764349 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" containerID="cri-o://c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" gracePeriod=30 Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-grkqh" event={"ID":"9c973bde-ff14-4cce-9f9c-57354dbd4adb","Type":"ContainerDied","Data":"b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90"} Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769244 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3a7579e2ea00af7974e6f233c7249ba1f5d8c4ed824a86714e0fb4c62e7eb90" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.769354 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-grkqh" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.836121 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.9986397030000003 podStartE2EDuration="54.836096589s" podCreationTimestamp="2026-02-14 04:31:04 +0000 UTC" firstStartedPulling="2026-02-14 04:31:07.256201489 +0000 UTC m=+1299.337138803" lastFinishedPulling="2026-02-14 04:31:58.093658375 +0000 UTC m=+1350.174595689" observedRunningTime="2026-02-14 04:31:58.810405478 +0000 UTC m=+1350.891342792" watchObservedRunningTime="2026-02-14 04:31:58.836096589 +0000 UTC m=+1350.917033903" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865070 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"] Feb 14 04:31:58 crc kubenswrapper[4867]: E0214 04:31:58.865544 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865556 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: E0214 04:31:58.865598 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865605 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865790 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" containerName="cinder-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.865806 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" containerName="barbican-db-sync" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.866976 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.869472 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-p86vr" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.869877 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.879090 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.907159 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"] Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.929752 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.929902 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930018 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.930097 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.968372 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"] Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.970736 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:58 crc kubenswrapper[4867]: I0214 04:31:58.976789 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031666 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031741 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031832 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031895 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.031965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc 
kubenswrapper[4867]: I0214 04:31:59.031998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.032035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.034446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6517b483-cb9c-465e-a7f0-f697b6ba3189-logs\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.041718 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-combined-ca-bundle\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.044125 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.044162 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.045873 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.047674 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.063229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvm7w\" (UniqueName: \"kubernetes.io/projected/6517b483-cb9c-465e-a7f0-f697b6ba3189-kube-api-access-xvm7w\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.066127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6517b483-cb9c-465e-a7f0-f697b6ba3189-config-data-custom\") pod \"barbican-worker-6cb8d59db5-hc7rx\" (UID: \"6517b483-cb9c-465e-a7f0-f697b6ba3189\") " pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.069030 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.130981 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.135098 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136105 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136258 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136322 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.136499 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.137301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-logs\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.140852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: 
I0214 04:31:59.141868 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.142346 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-combined-ca-bundle\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.151023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-config-data-custom\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.193178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjcwg\" (UniqueName: \"kubernetes.io/projected/4a4a3883-6484-4af9-a7f0-8dd5ee4da247-kube-api-access-rjcwg\") pod \"barbican-keystone-listener-7f6876db8-kxmgv\" (UID: \"4a4a3883-6484-4af9-a7f0-8dd5ee4da247\") " pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240637 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240787 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240817 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240895 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240926 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.240952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.241706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.254871 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.260000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.263772 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: 
I0214 04:31:59.265478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.266533 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.273884 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.274895 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"dnsmasq-dns-85ff748b95-rh624\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.342841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344433 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344523 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.344721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.346699 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.348805 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.349929 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.352630 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.398169 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"barbican-api-78546bb898-l5722\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.404417 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.424451 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430182 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430384 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.430597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-76c2m" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.435519 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.474161 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.475102 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553009 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553122 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553199 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.553294 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.554491 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.556746 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.599994 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.643404 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657404 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657437 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657517 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657569 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.657595 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.666996 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.667080 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.669601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.682706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 
04:31:59.685895 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.688234 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.689292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.692538 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.707178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.712453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"cinder-scheduler-0\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764050 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764098 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 
04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.764274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.828015 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857229 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" exitCode=0 Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857259 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" exitCode=2 Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857281 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f"} Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.857308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47"} Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866291 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866344 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866483 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: 
\"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866614 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866635 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866680 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866697 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866758 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.866801 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.867836 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.868385 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.869194 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.870384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.871127 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.897121 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"dnsmasq-dns-5c9776ccc5-pq99b\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") " pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.973859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974076 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: 
\"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974342 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.974372 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.975204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.975966 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.977813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.979464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.987422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:31:59 crc kubenswrapper[4867]: I0214 04:31:59.991423 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.003957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"cinder-api-0\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.117079 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.131848 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.159275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7f6876db8-kxmgv"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.188233 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6cb8d59db5-hc7rx"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.319279 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.413394 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:00 crc kubenswrapper[4867]: W0214 04:32:00.419166 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf24394_6465_476f_a99e_f46fce318656.slice/crio-8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f WatchSource:0}: Error finding container 8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f: Status 404 returned error can't find the container with id 8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.525033 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.793749 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.911265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.912910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"3e15ae2331b94d3c6d65cab2376b0b1e088c96cfaa63266969feb367a3f3d213"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.913792 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"bab0243e2f30ade2fbf7d69ff5be791722a012a5265d850b903dfa45eb14c8cb"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.933073 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerStarted","Data":"09deda04b6ec52201b019321aa75e2ff7261072711b3d07a6ec6b5a3d2007260"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.939149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"5019e7ff2e1fa2506cd7b4669b444efae61a9516b4d15aaa76c5f39c261cc2e8"} Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.944439 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" exitCode=0 Feb 14 04:32:00 crc kubenswrapper[4867]: I0214 04:32:00.944478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3"} Feb 14 04:32:01 crc kubenswrapper[4867]: W0214 04:32:01.073959 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod746b9097_84d0_4d00_a92c_808df9206d8a.slice/crio-9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b WatchSource:0}: Error finding container 9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b: Status 404 returned error can't find the container with id 9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.109301 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.265423 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.265486 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.411270 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.982538 4867 generic.go:334] "Generic (PLEG): container finished" podID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerID="8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b" exitCode=0 Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.982711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerDied","Data":"8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.992998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.993076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerStarted","Data":"3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.993204 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.996379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.996424 4867 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"b7df10ba039fcee4e4b9fcdc56451b0c829f6865f31aa92d7e75f9d0c4ffbbef"} Feb 14 04:32:01 crc kubenswrapper[4867]: I0214 04:32:01.999372 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerStarted","Data":"9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b"} Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.053176 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-78546bb898-l5722" podStartSLOduration=3.053150257 podStartE2EDuration="3.053150257s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:02.036967068 +0000 UTC m=+1354.117904382" watchObservedRunningTime="2026-02-14 04:32:02.053150257 +0000 UTC m=+1354.134087571" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.626668 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751754 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751887 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.751963 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.752002 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.752066 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") pod \"ead79748-92fd-4acc-9abb-e5d73a7be7da\" (UID: \"ead79748-92fd-4acc-9abb-e5d73a7be7da\") " Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.766961 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc" (OuterVolumeSpecName: "kube-api-access-7sftc") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "kube-api-access-7sftc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.793235 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.795170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.799115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.799630 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.814488 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config" (OuterVolumeSpecName: "config") pod "ead79748-92fd-4acc-9abb-e5d73a7be7da" (UID: "ead79748-92fd-4acc-9abb-e5d73a7be7da"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888066 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888359 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888427 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888499 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888587 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ead79748-92fd-4acc-9abb-e5d73a7be7da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:02 crc kubenswrapper[4867]: I0214 04:32:02.888661 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sftc\" (UniqueName: \"kubernetes.io/projected/ead79748-92fd-4acc-9abb-e5d73a7be7da-kube-api-access-7sftc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.021489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"e655fb0709e8bb6c8faaedd3620b602dde2f71f117bc0a4ce2f2db694fa65dcc"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.029609 4867 generic.go:334] "Generic (PLEG): container finished" podID="746b9097-84d0-4d00-a92c-808df9206d8a" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" exitCode=0 Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.029739 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.034569 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"f4ad14ad915c712ba0f4f33465067e05c07cc0afb594e4331d69be3ed95dd3cd"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.053079 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rh624" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.054258 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rh624" event={"ID":"ead79748-92fd-4acc-9abb-e5d73a7be7da","Type":"ContainerDied","Data":"09deda04b6ec52201b019321aa75e2ff7261072711b3d07a6ec6b5a3d2007260"} Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.054436 4867 scope.go:117] "RemoveContainer" containerID="8347aa29efdc5405a84ddb4018ebd17d5d842526f15a338717ac351c9e5c192b" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.055194 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.192440 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:03 crc kubenswrapper[4867]: I0214 04:32:03.230591 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rh624"] Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.099915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" event={"ID":"4a4a3883-6484-4af9-a7f0-8dd5ee4da247","Type":"ContainerStarted","Data":"47d54b8f3d1cbe2df657b8c4ef5ec2454d923a5ded983165ad2ca683545e743a"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.127862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" event={"ID":"6517b483-cb9c-465e-a7f0-f697b6ba3189","Type":"ContainerStarted","Data":"a1e0acea8b8254a02fd035c490fd90428229ffa5a7fbe5002fd7d9df1e79a22d"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.164766 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6cb8d59db5-hc7rx" podStartSLOduration=3.7944623 podStartE2EDuration="6.16474163s" podCreationTimestamp="2026-02-14 04:31:58 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.219663981 +0000 UTC m=+1352.300601295" lastFinishedPulling="2026-02-14 04:32:02.589943311 +0000 UTC m=+1354.670880625" observedRunningTime="2026-02-14 04:32:04.157787316 +0000 UTC m=+1356.238724630" watchObservedRunningTime="2026-02-14 04:32:04.16474163 +0000 UTC m=+1356.245678944" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.173806 4867 generic.go:334] "Generic (PLEG): container finished" podID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerID="5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" exitCode=0 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.173922 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.174800 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7f6876db8-kxmgv" podStartSLOduration=3.798682872 podStartE2EDuration="6.174774176s" podCreationTimestamp="2026-02-14 04:31:58 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.219962319 +0000 UTC m=+1352.300899633" lastFinishedPulling="2026-02-14 04:32:02.596053623 +0000 UTC m=+1354.676990937" observedRunningTime="2026-02-14 04:32:04.136861332 +0000 UTC m=+1356.217798646" watchObservedRunningTime="2026-02-14 04:32:04.174774176 +0000 UTC m=+1356.255711490" Feb 14 04:32:04 crc 
kubenswrapper[4867]: I0214 04:32:04.196680 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerStarted","Data":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.196852 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" containerID="cri-o://840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" gracePeriod=30 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.196936 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.197062 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" containerID="cri-o://016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" gracePeriod=30 Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.231972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerStarted","Data":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.232296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.252693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.275931 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.275907456 podStartE2EDuration="5.275907456s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:04.23112899 +0000 UTC m=+1356.312066304" watchObservedRunningTime="2026-02-14 04:32:04.275907456 +0000 UTC m=+1356.356844770" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.311672 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" podStartSLOduration=5.311643903 podStartE2EDuration="5.311643903s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:04.259547063 +0000 UTC m=+1356.340484377" watchObservedRunningTime="2026-02-14 04:32:04.311643903 +0000 UTC m=+1356.392581217" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.447569 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547184 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547250 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547286 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547351 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547609 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547604 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.547634 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") pod \"20f83c90-35bd-4d40-90e4-f992c7844a5d\" (UID: \"20f83c90-35bd-4d40-90e4-f992c7844a5d\") " Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.548931 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.549076 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.555228 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx" (OuterVolumeSpecName: "kube-api-access-6tmkx") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "kube-api-access-6tmkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.555859 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts" (OuterVolumeSpecName: "scripts") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.612858 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.651749 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f83c90-35bd-4d40-90e4-f992c7844a5d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652180 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652195 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.652207 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tmkx\" (UniqueName: \"kubernetes.io/projected/20f83c90-35bd-4d40-90e4-f992c7844a5d-kube-api-access-6tmkx\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.744602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data" (OuterVolumeSpecName: "config-data") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.755342 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.791100 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20f83c90-35bd-4d40-90e4-f992c7844a5d" (UID: "20f83c90-35bd-4d40-90e4-f992c7844a5d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:04 crc kubenswrapper[4867]: I0214 04:32:04.858451 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f83c90-35bd-4d40-90e4-f992c7844a5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.026015 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" path="/var/lib/kubelet/pods/ead79748-92fd-4acc-9abb-e5d73a7be7da/volumes" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.108438 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.265711 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerStarted","Data":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.268999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f83c90-35bd-4d40-90e4-f992c7844a5d","Type":"ContainerDied","Data":"fec759d47361c43e0a7e0280d89486799080a9e793713da877ee4655c98870f4"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.269062 4867 scope.go:117] "RemoveContainer" containerID="c3cf8cc9c9af14899e3e42c8a5806f199da51be9cd935b737e6e52767602944f" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.269243 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.272930 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.272983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273026 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273173 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273272 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.273330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") pod \"defe0915-1f3e-4357-ba66-529a3801b279\" (UID: \"defe0915-1f3e-4357-ba66-529a3801b279\") " Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.274000 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.275876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs" (OuterVolumeSpecName: "logs") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.281819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts" (OuterVolumeSpecName: "scripts") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.282440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h" (OuterVolumeSpecName: "kube-api-access-dhr5h") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "kube-api-access-dhr5h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.283612 4867 generic.go:334] "Generic (PLEG): container finished" podID="defe0915-1f3e-4357-ba66-529a3801b279" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" exitCode=0 Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.283654 4867 generic.go:334] "Generic (PLEG): container finished" podID="defe0915-1f3e-4357-ba66-529a3801b279" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" exitCode=143 Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284585 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"defe0915-1f3e-4357-ba66-529a3801b279","Type":"ContainerDied","Data":"b7df10ba039fcee4e4b9fcdc56451b0c829f6865f31aa92d7e75f9d0c4ffbbef"} Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.284644 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.293606 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.303527 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.406744463 podStartE2EDuration="6.303488716s" podCreationTimestamp="2026-02-14 04:31:59 +0000 UTC" firstStartedPulling="2026-02-14 04:32:00.529381798 +0000 UTC m=+1352.610319112" lastFinishedPulling="2026-02-14 04:32:01.426126051 +0000 UTC m=+1353.507063365" observedRunningTime="2026-02-14 04:32:05.293973074 +0000 UTC m=+1357.374910388" watchObservedRunningTime="2026-02-14 04:32:05.303488716 +0000 UTC m=+1357.384426030" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.323032 4867 scope.go:117] "RemoveContainer" containerID="80b2feaac0df4a17c38e5c52338aa4756e2f98cfb9c0f642287cd39641d2aa47" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.326263 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.357406 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379433 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379467 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhr5h\" (UniqueName: \"kubernetes.io/projected/defe0915-1f3e-4357-ba66-529a3801b279-kube-api-access-dhr5h\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379487 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/defe0915-1f3e-4357-ba66-529a3801b279-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379526 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379537 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/defe0915-1f3e-4357-ba66-529a3801b279-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.379845 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.387748 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data" (OuterVolumeSpecName: "config-data") pod "defe0915-1f3e-4357-ba66-529a3801b279" (UID: "defe0915-1f3e-4357-ba66-529a3801b279"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.397131 4867 scope.go:117] "RemoveContainer" containerID="5aef47de2b98909844392965ecce12a94c4a0b4e3f7b14facabcf28be59312be" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.418663 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419296 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419328 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419354 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419363 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419384 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419393 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419423 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419434 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419446 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419483 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419492 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.419568 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419580 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419890 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api-log" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419912 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ead79748-92fd-4acc-9abb-e5d73a7be7da" containerName="init" Feb 14 04:32:05 
crc kubenswrapper[4867]: I0214 04:32:05.419926 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="defe0915-1f3e-4357-ba66-529a3801b279" containerName="cinder-api" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419935 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-notification-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419945 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="proxy-httpd" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419963 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="sg-core" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.419978 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" containerName="ceilometer-central-agent" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.436346 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.442189 4867 scope.go:117] "RemoveContainer" containerID="12c007eaf3f2f0273b4b97ee67fcb41bee882cea55e4b7022e88e2bd510463b3" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.442467 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.443017 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.446025 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.481818 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.483904 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/defe0915-1f3e-4357-ba66-529a3801b279-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.514295 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.542664 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.543167 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543200 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} err="failed to get container status \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": rpc error: code = NotFound desc = could not find container 
\"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543232 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: E0214 04:32:05.543578 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543633 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} err="failed to get container status \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543694 4867 scope.go:117] "RemoveContainer" containerID="016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543942 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597"} err="failed to get container status \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": rpc error: code = NotFound desc = could not find container \"016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597\": container with ID starting with 016421f8d1fadaca0abec6bb1a08cd7059d9199b8b1337fce2ac9c878f82f597 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.543966 4867 scope.go:117] "RemoveContainer" containerID="840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.544157 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24"} err="failed to get container status \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": rpc error: code = NotFound desc = could not find container \"840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24\": container with ID starting with 840888b4eb2d6ca224cd2d23e11a1c6d063d10b85bbe55c18a106f69c4fb5e24 not found: ID does not exist" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586248 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586421 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.586597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587306 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.587429 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.659662 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.673378 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.689687 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.689993 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690155 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690299 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690410 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690584 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.690718 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.691543 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.691568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.698366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.698989 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.700118 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.700802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.708317 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.715077 4867 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.730666 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.730835 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"ceilometer-0\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.736591 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.748873 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.749140 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.775637 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.789587 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.806481 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.811611 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.811941 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.839703 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"] Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914408 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914570 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914607 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914644 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914708 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914748 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914813 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914853 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914886 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914946 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.914986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:05 crc kubenswrapper[4867]: I0214 04:32:05.915132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.016675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.016980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017067 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/195db0d6-0991-48b6-a7a1-ad5311555ede-etc-machine-id\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017472 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017531 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " 
pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017626 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017918 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.017942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018082 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.018293 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/195db0d6-0991-48b6-a7a1-ad5311555ede-logs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.021583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3375fa12-2e3a-431e-9341-72d5a213083e-logs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.027237 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-scripts\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.027267 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-public-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.028247 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-combined-ca-bundle\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.031186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-public-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.031752 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.035209 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data-custom\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.035280 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.036307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-config-data\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037235 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3375fa12-2e3a-431e-9341-72d5a213083e-internal-tls-certs\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.037313 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/195db0d6-0991-48b6-a7a1-ad5311555ede-config-data-custom\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.040268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlq4\" (UniqueName: \"kubernetes.io/projected/195db0d6-0991-48b6-a7a1-ad5311555ede-kube-api-access-bmlq4\") pod \"cinder-api-0\" (UID: \"195db0d6-0991-48b6-a7a1-ad5311555ede\") " pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.041518 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlvsb\" (UniqueName: \"kubernetes.io/projected/3375fa12-2e3a-431e-9341-72d5a213083e-kube-api-access-jlvsb\") pod \"barbican-api-584d8cfdf8-4lt8c\" (UID: \"3375fa12-2e3a-431e-9341-72d5a213083e\") " pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.276004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.288128 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.422996 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.679350 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.840066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.952831 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.953332 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" containerID="cri-o://df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" gracePeriod=30 Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.954179 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" containerID="cri-o://f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" gracePeriod=30 Feb 14 04:32:06 crc kubenswrapper[4867]: I0214 04:32:06.994611 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.096844 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f83c90-35bd-4d40-90e4-f992c7844a5d" path="/var/lib/kubelet/pods/20f83c90-35bd-4d40-90e4-f992c7844a5d/volumes" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.098226 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defe0915-1f3e-4357-ba66-529a3801b279" path="/var/lib/kubelet/pods/defe0915-1f3e-4357-ba66-529a3801b279/volumes" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.099081 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"] Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102302 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"] Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102329 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-584d8cfdf8-4lt8c"] Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.102407 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: W0214 04:32:07.132480 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3375fa12_2e3a_431e_9341_72d5a213083e.slice/crio-144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2 WatchSource:0}: Error finding container 144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2: Status 404 returned error can't find the container with id 144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2 Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156617 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156725 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156784 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.156953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260111 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260232 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260264 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260299 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260374 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260429 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.260572 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.270469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-internal-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.272036 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-combined-ca-bundle\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.279213 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: 
\"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.280154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-httpd-config\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.282452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-ovndb-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.287179 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn54r\" (UniqueName: \"kubernetes.io/projected/d4a16bfe-366a-4143-932a-e0b51615c401-kube-api-access-xn54r\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.292606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a16bfe-366a-4143-932a-e0b51615c401-public-tls-certs\") pod \"neutron-7886d5654f-wzr2s\" (UID: \"d4a16bfe-366a-4143-932a-e0b51615c401\") " pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.344624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"c5af4b5f8602cd5b59f39b9b073911fd553022dc70a80e4fe1af5abd876f1920"} Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.348523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"144d7358dce1482e969a6c4d4aa4368c97382a613b0107504c81e13a057467b2"} Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.350613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"a44dda6f8296393dedba55dfb959cf2361267f611586efd623055a585266e1ef"} Feb 14 04:32:07 crc kubenswrapper[4867]: I0214 04:32:07.482895 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.420840 4867 generic.go:334] "Generic (PLEG): container finished" podID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerID="f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" exitCode=0 Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.420928 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9"} Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.425845 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a"} Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"18cd9262ede2c3ab09044a01019d623017616a5fa4d03ea3db50d9c90f8a8f5d"} Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-584d8cfdf8-4lt8c" event={"ID":"3375fa12-2e3a-431e-9341-72d5a213083e","Type":"ContainerStarted","Data":"503b65e504ecf9985fd72e4c51681c3c1b4b6bf77287a996db21c0ac81e2c2de"} Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.434116 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.453813 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-584d8cfdf8-4lt8c" podStartSLOduration=3.453794979 podStartE2EDuration="3.453794979s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:08.450785608 +0000 UTC m=+1360.531722922" watchObservedRunningTime="2026-02-14 04:32:08.453794979 +0000 UTC m=+1360.534732293" Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.468847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"d8c5c9f74ccd78823f9d33bdc90facd5590dbe73c66e68e9f9f90cdf5225e85c"} Feb 14 04:32:08 crc kubenswrapper[4867]: W0214 04:32:08.501768 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a16bfe_366a_4143_932a_e0b51615c401.slice/crio-8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9 WatchSource:0}: Error finding container 8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9: Status 404 returned error can't find the container with id 8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9 Feb 14 04:32:08 crc kubenswrapper[4867]: I0214 04:32:08.503271 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7886d5654f-wzr2s"] Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.484770 
4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.485243 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486715 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"5e0a62aa6ec3491a2cf67a13cda5ff17befc72d1618cee92b1cfc69b6aa572e0"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"4c03492a1b05456f7e21cb68a1fce0332c5fc554391765af8a0d2c450f2b4455"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.486746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7886d5654f-wzr2s" event={"ID":"d4a16bfe-366a-4143-932a-e0b51615c401","Type":"ContainerStarted","Data":"8f2bf0f6910639058db96bcd1da70119bfecc6e8fd63bf7fd1c5af30dbd9f9c9"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.488332 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.493593 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"195db0d6-0991-48b6-a7a1-ad5311555ede","Type":"ContainerStarted","Data":"6a35c006524f36990453cacbcd07435b4ee94829298141ee3c860cd141deda2f"} Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.509031 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7886d5654f-wzr2s" podStartSLOduration=3.5090071419999997 podStartE2EDuration="3.509007142s" podCreationTimestamp="2026-02-14 04:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:09.506694899 +0000 UTC m=+1361.587632213" watchObservedRunningTime="2026-02-14 04:32:09.509007142 +0000 UTC m=+1361.589944466" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.540800 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.540775485 podStartE2EDuration="4.540775485s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:09.533176141 +0000 UTC m=+1361.614113465" watchObservedRunningTime="2026-02-14 04:32:09.540775485 +0000 UTC m=+1361.621712799" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.641441 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-569c46898f-bbd5l" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.194:9696/\": dial tcp 10.217.0.194:9696: connect: connection refused" Feb 14 04:32:09 crc kubenswrapper[4867]: I0214 04:32:09.829398 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.096010 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.122835 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.189917 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.194853 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" containerID="cri-o://3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" gracePeriod=10 Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.508716 4867 generic.go:334] "Generic (PLEG): container finished" podID="41682938-f603-460d-91e2-9de423799697" containerID="3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" exitCode=0 Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.508829 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0"} Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.509836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.579225 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.829729 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896669 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.896952 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.897154 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") pod \"41682938-f603-460d-91e2-9de423799697\" (UID: \"41682938-f603-460d-91e2-9de423799697\") " Feb 14 04:32:10 crc kubenswrapper[4867]: I0214 04:32:10.917251 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4" (OuterVolumeSpecName: "kube-api-access-bwkw4") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "kube-api-access-bwkw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.006418 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwkw4\" (UniqueName: \"kubernetes.io/projected/41682938-f603-460d-91e2-9de423799697-kube-api-access-bwkw4\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.033380 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.037112 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.085305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.087918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config" (OuterVolumeSpecName: "config") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.100073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41682938-f603-460d-91e2-9de423799697" (UID: "41682938-f603-460d-91e2-9de423799697"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108927 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108959 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108970 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108978 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.108986 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41682938-f603-460d-91e2-9de423799697-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.523378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerStarted","Data":"fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827"} Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.523876 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" event={"ID":"41682938-f603-460d-91e2-9de423799697","Type":"ContainerDied","Data":"fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb"} Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533801 4867 scope.go:117] "RemoveContainer" containerID="3fa0ecdd88a94efe2f93d06bd0c02307c78ae77450f27f456086d11f4e56cff0" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.533948 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" containerID="cri-o://ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" gracePeriod=30 Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.534005 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zkb5z" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.534065 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" containerID="cri-o://972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" gracePeriod=30 Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.604452 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.192844962 podStartE2EDuration="6.604426202s" podCreationTimestamp="2026-02-14 04:32:05 +0000 UTC" firstStartedPulling="2026-02-14 04:32:06.435732149 +0000 UTC m=+1358.516669463" lastFinishedPulling="2026-02-14 04:32:10.847313389 +0000 UTC m=+1362.928250703" observedRunningTime="2026-02-14 04:32:11.570493941 +0000 UTC m=+1363.651431255" watchObservedRunningTime="2026-02-14 04:32:11.604426202 +0000 UTC m=+1363.685363516" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.625738 4867 scope.go:117] "RemoveContainer" containerID="89d6a8bcac13fc998b43875a988468666140ff6de2472314fab3fcf4097c9cae" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.666936 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.684803 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zkb5z"] Feb 14 04:32:11 crc kubenswrapper[4867]: E0214 04:32:11.846125 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41682938_f603_460d_91e2_9de423799697.slice/crio-fb9de469ce205f58ab8b9cb9fe410a6dc2ae4ce6eea561956a614622a54d90eb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41682938_f603_460d_91e2_9de423799697.slice\": RecentStats: unable to find data in memory cache]" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.852258 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:11 crc kubenswrapper[4867]: I0214 04:32:11.951839 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.547472 4867 generic.go:334] 
"Generic (PLEG): container finished" podID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerID="df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" exitCode=0 Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.547560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34"} Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.557878 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6c55469-3aa2-4471-932a-442ce56570a7" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" exitCode=0 Feb 14 04:32:12 crc kubenswrapper[4867]: I0214 04:32:12.557956 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.014288 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41682938-f603-460d-91e2-9de423799697" path="/var/lib/kubelet/pods/41682938-f603-460d-91e2-9de423799697/volumes" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.030151 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171183 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171252 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171348 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171386 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171573 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171686 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") pod 
\"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.171765 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") pod \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\" (UID: \"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.185479 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2" (OuterVolumeSpecName: "kube-api-access-lhvs2") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "kube-api-access-lhvs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.191685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.259666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.263635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.265605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config" (OuterVolumeSpecName: "config") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.275619 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.275954 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhvs2\" (UniqueName: \"kubernetes.io/projected/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-kube-api-access-lhvs2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276026 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276084 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.276146 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.299152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.356931 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" (UID: "8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.378660 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.378692 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.410957 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.582954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.584424 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.585791 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586829 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.586957 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") pod \"b6c55469-3aa2-4471-932a-442ce56570a7\" (UID: \"b6c55469-3aa2-4471-932a-442ce56570a7\") " Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.587795 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.590991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq" (OuterVolumeSpecName: "kube-api-access-kv2hq") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "kube-api-access-kv2hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.595457 4867 generic.go:334] "Generic (PLEG): container finished" podID="b6c55469-3aa2-4471-932a-442ce56570a7" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" exitCode=0 Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.595641 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596749 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b6c55469-3aa2-4471-932a-442ce56570a7","Type":"ContainerDied","Data":"3e15ae2331b94d3c6d65cab2376b0b1e088c96cfaa63266969feb367a3f3d213"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.596853 4867 scope.go:117] "RemoveContainer" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.597355 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.603738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts" (OuterVolumeSpecName: "scripts") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.624874 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-569c46898f-bbd5l" event={"ID":"8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d","Type":"ContainerDied","Data":"028f5efc08b53a55521858d44a43207730eee63dfa58503296592bae2f4868dd"} Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.625336 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-569c46898f-bbd5l" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.647711 4867 scope.go:117] "RemoveContainer" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691663 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv2hq\" (UniqueName: \"kubernetes.io/projected/b6c55469-3aa2-4471-932a-442ce56570a7-kube-api-access-kv2hq\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691698 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691709 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691720 4867 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6c55469-3aa2-4471-932a-442ce56570a7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.691848 4867 scope.go:117] "RemoveContainer" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.694972 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": container with ID starting with 972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238 not found: ID does not exist" containerID="972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.695015 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238"} err="failed to get container status \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": rpc error: code = NotFound desc = could not find container \"972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238\": container with ID starting with 972cae5e159f32657523e5994c8475d8de82cc180b3a8e9a74d4c60a95877238 not found: ID does not exist" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.695051 4867 scope.go:117] "RemoveContainer" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.701605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.702107 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": container with ID starting with ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede not found: ID does not exist" containerID="ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.702465 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede"} err="failed to get container status \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": rpc error: code = NotFound desc = could not find container \"ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede\": container with ID starting with ecbff86946fb366e485d44a146ac2998664c52a1b60ad30dd4585b7cf70bfede not found: ID does not exist" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.702726 4867 scope.go:117] "RemoveContainer" containerID="f445405ff2670ec25765e689c899369e6b86208982965111c8fd6b86edd2a3f9" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.731656 4867 scope.go:117] "RemoveContainer" containerID="df38319c35b43b20a57003cff86a29347a0b01099020f21394a48e3029dd9a34" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.740683 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.742607 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data" (OuterVolumeSpecName: "config-data") pod "b6c55469-3aa2-4471-932a-442ce56570a7" (UID: "b6c55469-3aa2-4471-932a-442ce56570a7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.770021 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-569c46898f-bbd5l"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.797067 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.797326 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6c55469-3aa2-4471-932a-442ce56570a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.935536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.945518 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.958685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959143 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959166 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959211 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959217 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959249 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959267 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41682938-f603-460d-91e2-9de423799697" containerName="init" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959274 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="41682938-f603-460d-91e2-9de423799697" containerName="init" Feb 14 04:32:13 crc kubenswrapper[4867]: E0214 04:32:13.959290 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959296 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc 
kubenswrapper[4867]: I0214 04:32:13.959487 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-api" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959517 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="probe" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959530 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" containerName="neutron-httpd" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959539 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="41682938-f603-460d-91e2-9de423799697" containerName="dnsmasq-dns" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.959552 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" containerName="cinder-scheduler" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.960891 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.963831 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 14 04:32:13 crc kubenswrapper[4867]: I0214 04:32:13.986568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.104998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105390 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.105593 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.209779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.209914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210069 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210205 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210295 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.210963 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.216781 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-scripts\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.216862 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc 
kubenswrapper[4867]: I0214 04:32:14.218894 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.220240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.242482 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77zwv\" (UniqueName: \"kubernetes.io/projected/38c903d9-50f6-418b-84d5-7ee82e9d1e2f-kube-api-access-77zwv\") pod \"cinder-scheduler-0\" (UID: \"38c903d9-50f6-418b-84d5-7ee82e9d1e2f\") " pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.278560 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 14 04:32:14 crc kubenswrapper[4867]: I0214 04:32:14.796297 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.012096 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d" path="/var/lib/kubelet/pods/8ed277cc-90dd-4cba-a4ac-3a9d0cee5e7d/volumes" Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.013018 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c55469-3aa2-4471-932a-442ce56570a7" path="/var/lib/kubelet/pods/b6c55469-3aa2-4471-932a-442ce56570a7/volumes" Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.652209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} Feb 14 04:32:15 crc kubenswrapper[4867]: I0214 04:32:15.652485 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"b96089b9e38c0ea636878ef1bd934fcde069d5a09954a966d32d520181a11a44"} Feb 14 04:32:16 crc kubenswrapper[4867]: I0214 04:32:16.665944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"6720ffef72a95db4909acc117037c90ac9a391f6a23631323aba22f62f962e10"} Feb 14 04:32:16 crc kubenswrapper[4867]: I0214 04:32:16.691040 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6910190529999998 podStartE2EDuration="3.691019053s" podCreationTimestamp="2026-02-14 04:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:16.686732488 +0000 UTC m=+1368.767669802" watchObservedRunningTime="2026-02-14 04:32:16.691019053 +0000 UTC m=+1368.771956387" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.846218 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.889871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-584d8cfdf8-4lt8c" Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.992523 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.992790 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" containerID="cri-o://3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456" gracePeriod=30 Feb 14 04:32:17 crc kubenswrapper[4867]: I0214 04:32:17.993423 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" containerID="cri-o://d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81" gracePeriod=30 Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.691218 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bf24394-6465-476f-a99e-f46fce318656" containerID="3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456" exitCode=143 Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.691656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456"} Feb 14 04:32:18 crc kubenswrapper[4867]: I0214 04:32:18.904354 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.279868 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.760084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:19 crc kubenswrapper[4867]: I0214 04:32:19.761147 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:20 crc kubenswrapper[4867]: I0214 04:32:20.962615 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7595b47f77-vtg9d" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.417692 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35138->10.217.0.201:9311: read: connection reset by peer" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.417884 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-78546bb898-l5722" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.201:9311/healthcheck\": read tcp 10.217.0.2:35132->10.217.0.201:9311: read: connection reset by peer" Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.747764 4867 generic.go:334] "Generic (PLEG): container finished" podID="3bf24394-6465-476f-a99e-f46fce318656" 
containerID="d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81" exitCode=0 Feb 14 04:32:21 crc kubenswrapper[4867]: I0214 04:32:21.747816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81"} Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.088067 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.181975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182208 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182305 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182410 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") pod \"3bf24394-6465-476f-a99e-f46fce318656\" (UID: \"3bf24394-6465-476f-a99e-f46fce318656\") " Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.182821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs" (OuterVolumeSpecName: "logs") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.183191 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bf24394-6465-476f-a99e-f46fce318656-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.192791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.207819 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz" (OuterVolumeSpecName: "kube-api-access-2bvxz") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "kube-api-access-2bvxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.208317 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.210084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8574cd8bdd-r5cv6" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.239831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.284952 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bvxz\" (UniqueName: \"kubernetes.io/projected/3bf24394-6465-476f-a99e-f46fce318656-kube-api-access-2bvxz\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.296379 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.296886 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.285336 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data" (OuterVolumeSpecName: "config-data") pod "3bf24394-6465-476f-a99e-f46fce318656" (UID: "3bf24394-6465-476f-a99e-f46fce318656"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.312356 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.312626 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74d7c6cb48-8wr7l" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log" containerID="cri-o://e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218" gracePeriod=30 Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.313074 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-74d7c6cb48-8wr7l" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api" containerID="cri-o://95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86" gracePeriod=30 Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.399920 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf24394-6465-476f-a99e-f46fce318656-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.761042 4867 generic.go:334] "Generic (PLEG): container finished" podID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerID="e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218" exitCode=143 Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.761160 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218"} Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-78546bb898-l5722" event={"ID":"3bf24394-6465-476f-a99e-f46fce318656","Type":"ContainerDied","Data":"8d85459a09b7155a3e119769eaeb23dbfd9aa893f907e0c55fc24cbd558bf78f"} Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764476 4867 scope.go:117] "RemoveContainer" containerID="d7acae34b523e3a580609072a0335d9f4dc1a0643b2d2946b03ae70287735d81" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.764431 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-78546bb898-l5722" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.794635 4867 scope.go:117] "RemoveContainer" containerID="3195bbd4ee7008fc50e7835b398535783b87d1f4092164f29b60b4bdc5b3c456" Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.817267 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:22 crc kubenswrapper[4867]: I0214 04:32:22.830487 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-78546bb898-l5722"] Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.011384 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf24394-6465-476f-a99e-f46fce318656" path="/var/lib/kubelet/pods/3bf24394-6465-476f-a99e-f46fce318656/volumes" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.061468 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 14 04:32:23 crc kubenswrapper[4867]: E0214 04:32:23.062076 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062103 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" Feb 14 04:32:23 crc kubenswrapper[4867]: E0214 04:32:23.062144 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062154 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062479 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.062556 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf24394-6465-476f-a99e-f46fce318656" containerName="barbican-api-log" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.063532 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.065837 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.065837 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-th9bg" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.071896 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.088438 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218170 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218240 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218260 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.218466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.321243 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.323166 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.325541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.325879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-openstack-config-secret\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.339306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25trh\" (UniqueName: \"kubernetes.io/projected/6fdee887-8ecb-4c1e-8a88-0284fc050f0e-kube-api-access-25trh\") pod \"openstackclient\" (UID: \"6fdee887-8ecb-4c1e-8a88-0284fc050f0e\") " pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.382659 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 14 04:32:23 crc kubenswrapper[4867]: W0214 04:32:23.870300 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fdee887_8ecb_4c1e_8a88_0284fc050f0e.slice/crio-73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132 WatchSource:0}: Error finding container 73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132: Status 404 returned error can't find the container with id 73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132 Feb 14 04:32:23 crc kubenswrapper[4867]: I0214 04:32:23.874691 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 14 04:32:24 crc kubenswrapper[4867]: I0214 04:32:24.535959 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 04:32:24 crc kubenswrapper[4867]: I0214 04:32:24.791716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6fdee887-8ecb-4c1e-8a88-0284fc050f0e","Type":"ContainerStarted","Data":"73a29c963a8e17fd430e1895debed06ddaa001f6eafd4d2fdd31bcc1d7d2e132"} Feb 14 04:32:25 crc kubenswrapper[4867]: I0214 04:32:25.808308 4867 generic.go:334] "Generic (PLEG): container finished" podID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerID="95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86" exitCode=0 Feb 14 04:32:25 crc kubenswrapper[4867]: I0214 04:32:25.808686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86"} Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.132606 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311233 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311668 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311784 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.311960 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") pod \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\" (UID: \"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2\") " Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.314370 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs" (OuterVolumeSpecName: "logs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.319989 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts" (OuterVolumeSpecName: "scripts") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.320523 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2" (OuterVolumeSpecName: "kube-api-access-nzmv2") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "kube-api-access-nzmv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.401809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.412831 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data" (OuterVolumeSpecName: "config-data") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416085 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416268 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzmv2\" (UniqueName: \"kubernetes.io/projected/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-kube-api-access-nzmv2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416381 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416486 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.416591 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.475232 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.487766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" (UID: "8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.519491 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.519540 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824493 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-74d7c6cb48-8wr7l" event={"ID":"8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2","Type":"ContainerDied","Data":"d72d747bf641f17caffe57b13805170a59917becd98a04f814a50119c9f846ba"} Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824600 4867 scope.go:117] "RemoveContainer" containerID="95f9bf20e81b8ee8296887c27b1fc03c7aeba7ab6e8adc89f4de3b967b5b9c86" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.824667 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-74d7c6cb48-8wr7l" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.856651 4867 scope.go:117] "RemoveContainer" containerID="e3dbb7ce8b1d62d84a2b156d530b4308c99b32ab7b60ee3156b3ed9b46908218" Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.870629 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:32:26 crc kubenswrapper[4867]: I0214 04:32:26.887474 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-74d7c6cb48-8wr7l"] Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.014594 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" path="/var/lib/kubelet/pods/8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2/volumes" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.155857 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"] Feb 14 04:32:27 crc kubenswrapper[4867]: E0214 04:32:27.156427 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156445 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log" Feb 14 04:32:27 crc kubenswrapper[4867]: E0214 04:32:27.156487 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156495 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156758 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-api" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.156971 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f2a35ad-6f6f-4d6a-b4eb-44b2c2a661f2" containerName="placement-log" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.158230 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162365 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162601 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.162729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.182429 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"] Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238541 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238641 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.238717 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.239821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.240219 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " 
pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.240293 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346420 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346839 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346916 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.346986 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.347004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 
04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.348027 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-run-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.349144 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-log-httpd\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.351326 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-config-data\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353337 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-internal-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-public-tls-certs\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.353918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-etc-swift\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.354566 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-combined-ca-bundle\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.364408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vjtl\" (UniqueName: \"kubernetes.io/projected/76fdab94-9bfb-48b7-82f9-bdd6d2258cdb-kube-api-access-9vjtl\") pod \"swift-proxy-5559ff585f-sb7wb\" (UID: \"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb\") " pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:27 crc kubenswrapper[4867]: I0214 04:32:27.487399 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:28 crc kubenswrapper[4867]: W0214 04:32:28.123896 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76fdab94_9bfb_48b7_82f9_bdd6d2258cdb.slice/crio-b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233 WatchSource:0}: Error finding container b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233: Status 404 returned error can't find the container with id b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233 Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.139910 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5559ff585f-sb7wb"] Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851741 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"b59bd80307cb38da657610fdfea874e3ba1d1dada932f211c3e5710d88178369"} Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"338afbbd6ca87f6d2a8404cb72131a62bf136cc006b50cb5ceea030a6fa1583b"} Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.851800 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5559ff585f-sb7wb" event={"ID":"76fdab94-9bfb-48b7-82f9-bdd6d2258cdb","Type":"ContainerStarted","Data":"b4a8ff39ae65f8b71af03f89aaa9768f336d409db08fd8c3fc67bdc9a1d89233"} Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.853234 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.853260 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.857999 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858293 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" containerID="cri-o://fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" gracePeriod=30 Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858717 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" containerID="cri-o://48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" gracePeriod=30 Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858863 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" containerID="cri-o://fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" gracePeriod=30 Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.858906 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" 
containerID="cri-o://fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" gracePeriod=30 Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.869343 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.205:3000/\": EOF" Feb 14 04:32:28 crc kubenswrapper[4867]: I0214 04:32:28.879086 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5559ff585f-sb7wb" podStartSLOduration=1.879064981 podStartE2EDuration="1.879064981s" podCreationTimestamp="2026-02-14 04:32:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:28.878603058 +0000 UTC m=+1380.959540372" watchObservedRunningTime="2026-02-14 04:32:28.879064981 +0000 UTC m=+1380.960002295" Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866061 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" exitCode=0 Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866432 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" exitCode=2 Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866448 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" exitCode=0 Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827"} Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0"} Feb 14 04:32:29 crc kubenswrapper[4867]: I0214 04:32:29.866657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a"} Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251010 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251383 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.251441 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.252409 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.252487 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" gracePeriod=600 Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.899963 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" exitCode=0 Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.900008 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be"} Feb 14 04:32:31 crc kubenswrapper[4867]: I0214 04:32:31.900040 4867 scope.go:117] "RemoveContainer" containerID="a6dbe719cdc073fcc8481a2727f00815982a8bd61b2cd10d4229a11b7b5cb46c" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.052124 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.055200 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.060938 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.061016 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.061450 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-pzjfh" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.114570 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.216283 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219190 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.219820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.253078 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.317716 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.319501 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321546 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.321989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322115 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322153 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322205 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322258 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322322 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.322351 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.333109 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.350910 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.351341 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.352023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.356817 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"heat-engine-677c4ffcdf-n44s6\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") " pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.380355 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.382978 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.390826 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.399662 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.404675 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.424913 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.424977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425073 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425147 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425227 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425262 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.425308 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.426940 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.427670 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.429141 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.445542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"dnsmasq-dns-7756b9d78c-ccbrl\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") " pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527251 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527578 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527702 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.527992 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528146 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.528366 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.532452 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.534151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.534949 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.547869 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"heat-api-667b98697-gxqph\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") " pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.553171 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630355 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630418 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630453 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.630489 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.634942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.635128 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.635542 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.658401 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"heat-cfnapi-74c87bfcc9-g5dr4\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") " pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.797859 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.808103 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.921853 4867 generic.go:334] "Generic (PLEG): container finished" podID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerID="fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" exitCode=0 Feb 14 04:32:32 crc kubenswrapper[4867]: I0214 04:32:32.921902 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2"} Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.528462 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.534377 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.591119 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.634212 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.637547 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.652130 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.676659 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.676972 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.714142 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.715942 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.760611 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.762989 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.766872 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.778957 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779061 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.779149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.780052 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.791580 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.807789 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.816191 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"nova-api-db-create-5ffts\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.881882 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.881963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882026 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882054 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882132 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.882706 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.888956 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.890620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.892944 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.898143 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.907771 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.928825 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"nova-cell0-db-create-t8trt\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") " pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.981443 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985347 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985406 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.986295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.985532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987330 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987370 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.987397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:33 crc kubenswrapper[4867]: I0214 04:32:33.988300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.031256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"nova-cell1-db-create-slfhr\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") " pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.035220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"nova-api-a338-account-create-update-2zjhb\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") " pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.049718 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.089481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.099920 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.099459 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.090693 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.145155 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.146043 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"nova-cell0-8094-account-create-update-pbbgl\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") " pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.177232 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.180321 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.192669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.278475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.305467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.305602 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.409292 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.409451 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " 
pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.410273 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.428329 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"nova-cell1-8539-account-create-update-9j9p8\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:34 crc kubenswrapper[4867]: I0214 04:32:34.535607 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:35 crc kubenswrapper[4867]: I0214 04:32:35.777454 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.205:3000/\": dial tcp 10.217.0.205:3000: connect: connection refused" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.355542 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.401230 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.502955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.511674 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5559ff585f-sb7wb" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.526665 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7886d5654f-wzr2s" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.534933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535062 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535138 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535169 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.535435 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") pod \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\" (UID: \"7ce36665-fb1a-4860-bc8a-5e12431d4cd6\") " Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.538767 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.565101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts" (OuterVolumeSpecName: "scripts") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.570796 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.640203 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7" (OuterVolumeSpecName: "kube-api-access-4bfq7") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "kube-api-access-4bfq7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650575 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650612 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650624 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bfq7\" (UniqueName: \"kubernetes.io/projected/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-kube-api-access-4bfq7\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.650636 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.702549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.732729 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"] Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.733116 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c5fcd7cb-sr8z9" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd" containerID="cri-o://a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" gracePeriod=30 Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.733327 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c5fcd7cb-sr8z9" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api" containerID="cri-o://a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184" gracePeriod=30 Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.753059 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:37 crc kubenswrapper[4867]: I0214 04:32:37.797749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.856252 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.875478 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data" (OuterVolumeSpecName: "config-data") pod "7ce36665-fb1a-4860-bc8a-5e12431d4cd6" (UID: "7ce36665-fb1a-4860-bc8a-5e12431d4cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.959467 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce36665-fb1a-4860-bc8a-5e12431d4cd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.993010 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerStarted","Data":"d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7ce36665-fb1a-4860-bc8a-5e12431d4cd6","Type":"ContainerDied","Data":"c5af4b5f8602cd5b59f39b9b073911fd553022dc70a80e4fe1af5abd876f1920"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999773 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:37.999810 4867 scope.go:117] "RemoveContainer" containerID="fbab9809e65a478959fcc20b95a52910111448975d370afd8952ef2712282827" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.005703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.050190 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.080970 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.083223 4867 scope.go:117] "RemoveContainer" containerID="48f01dc9aa282450371f6297a6c143b96aef3bdcad1b711eb94a51bfc381c6b0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.142715 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143283 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143296 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143308 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 14 04:32:38 crc 
kubenswrapper[4867]: I0214 04:32:38.143314 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143329 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143335 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: E0214 04:32:38.143347 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143353 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143646 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="sg-core" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143657 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-notification-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143669 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="ceilometer-central-agent" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.143716 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" containerName="proxy-httpd" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.153888 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.157344 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.160903 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164520 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164575 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164630 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164707 4867 scope.go:117] "RemoveContainer" containerID="fe5aa9c47c46abdc1b30cca0eb25c76a83c0676a5128f68950adc248471821b2" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164792 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164850 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.164927 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.183747 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.220571 4867 scope.go:117] "RemoveContainer" containerID="fa147253ee7488f81ea6eca1453e9afe783991b356d4806c11a9a0f690b9282a" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 
04:32:38.256347 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.256782 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log" containerID="cri-o://461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1" gracePeriod=30 Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.257784 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd" containerID="cri-o://12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034" gracePeriod=30 Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.273642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.274450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.274498 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.278762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.278963 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.280014 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.285565 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.291850 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.293521 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.294496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.295072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.295498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.299297 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.301908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " pod="openstack/ceilometer-0" Feb 14 04:32:38 crc kubenswrapper[4867]: I0214 04:32:38.506106 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.029758 4867 generic.go:334] "Generic (PLEG): container finished" podID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerID="a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" exitCode=0 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.034731 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce36665-fb1a-4860-bc8a-5e12431d4cd6" path="/var/lib/kubelet/pods/7ce36665-fb1a-4860-bc8a-5e12431d4cd6/volumes" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.036022 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerStarted","Data":"edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.036047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.053078 4867 generic.go:334] "Generic (PLEG): container finished" podID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerID="461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1" exitCode=143 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.053175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.069808 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"6fdee887-8ecb-4c1e-8a88-0284fc050f0e","Type":"ContainerStarted","Data":"15477c8fe9da164a15217ee678063475cabb536791a08c99852060806de268b3"} Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.084036 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-5ffts" podStartSLOduration=6.084011855 podStartE2EDuration="6.084011855s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:39.070377019 +0000 UTC m=+1391.151314333" watchObservedRunningTime="2026-02-14 04:32:39.084011855 +0000 UTC m=+1391.164949169" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.103655 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.108333205 podStartE2EDuration="16.103634763s" podCreationTimestamp="2026-02-14 04:32:23 +0000 UTC" firstStartedPulling="2026-02-14 04:32:23.87359682 +0000 UTC m=+1375.954534134" lastFinishedPulling="2026-02-14 04:32:36.868898378 +0000 UTC m=+1388.949835692" observedRunningTime="2026-02-14 04:32:39.086547333 +0000 UTC m=+1391.167484647" watchObservedRunningTime="2026-02-14 04:32:39.103634763 +0000 UTC m=+1391.184572077" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.150068 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.165216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.177634 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.207405 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c28a361_2a59_45f2_baeb_e4d5313b6c17.slice/crio-279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51 WatchSource:0}: Error finding container 279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51: Status 404 returned error can't find the container with id 279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.435678 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.437432 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.470061 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.471847 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.504723 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.534785 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546126 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546228 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546335 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546400 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546452 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.546470 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.566703 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7959a0fa_00bd_492c_9892_a8c8727549c6.slice/crio-509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60 WatchSource:0}: Error finding container 509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60: Status 404 returned error can't find the container with id 509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.577579 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.579326 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.604561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.635947 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666588 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666638 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666662 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666862 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666930 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 
14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666965 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.666988 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.667059 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.667111 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.673206 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.673946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.674272 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.674899 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.694920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.695891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod 
\"heat-engine-7797898b6d-54xz8\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.703623 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.705927 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.707251 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"heat-api-8f9d657ff-n8g4q\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") " pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770261 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.770291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.787681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.793474 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " 
pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.804380 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.812675 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"heat-cfnapi-cf78bc599-cbb7h\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") " pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.900698 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:32:39 crc kubenswrapper[4867]: W0214 04:32:39.909912 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod708fbc3f_a05a_4b29_b455_32db117495d1.slice/crio-4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48 WatchSource:0}: Error finding container 4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48: Status 404 returned error can't find the container with id 4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48 Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.923467 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.952622 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.979963 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:39 crc kubenswrapper[4867]: I0214 04:32:39.980896 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.117017 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.145756 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.163691 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.181238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerStarted","Data":"4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.189318 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerStarted","Data":"38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.206490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerStarted","Data":"6a911f22f2445bf520e5b58ee0d37ec6810d7143ae0f24d44f2a1ba98f13ca47"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.228382 4867 generic.go:334] "Generic (PLEG): container finished" podID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerID="edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a" exitCode=0 Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.231410 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerDied","Data":"edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.237747 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerStarted","Data":"509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.239650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerStarted","Data":"6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.239679 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerStarted","Data":"b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.258469 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerStarted","Data":"26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.289379 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerStarted","Data":"073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.309420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerStarted","Data":"5411ca415d9a87d0850d6fbf4033b3de2e9b4aed86c0a53707211fd73a6a37cc"} Feb 14 04:32:40 crc 
kubenswrapper[4867]: I0214 04:32:40.326167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerStarted","Data":"279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51"} Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.339206 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" podStartSLOduration=6.339184721 podStartE2EDuration="6.339184721s" podCreationTimestamp="2026-02-14 04:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:40.293104413 +0000 UTC m=+1392.374041747" watchObservedRunningTime="2026-02-14 04:32:40.339184721 +0000 UTC m=+1392.420122035" Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.795436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:32:40 crc kubenswrapper[4867]: W0214 04:32:40.858697 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7535f37c_f2f6_4e75_bfa2_48211fe86ef6.slice/crio-dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43 WatchSource:0}: Error finding container dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43: Status 404 returned error can't find the container with id dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43 Feb 14 04:32:40 crc kubenswrapper[4867]: I0214 04:32:40.953958 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"] Feb 14 04:32:41 crc kubenswrapper[4867]: W0214 04:32:41.021764 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf9a1d71_05e1_40ab_90a7_530d2083fe14.slice/crio-da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320 WatchSource:0}: Error finding container da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320: Status 404 returned error can't find the container with id da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.285372 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"] Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.357854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"0c811d9a27d93bea50cf31c5a59216074fd035a7dfb9975cb4e0ef8eaca3d79f"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.366620 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerStarted","Data":"ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.371037 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerStarted","Data":"dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.377628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerStarted","Data":"d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43"} Feb 14 04:32:41 crc kubenswrapper[4867]: W0214 04:32:41.382444 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e650fa8_a893_47e0_a5d5_0df60430ea9e.slice/crio-a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365 WatchSource:0}: Error finding container a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365: Status 404 returned error can't find the container with id a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.411640 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a338-account-create-update-2zjhb" podStartSLOduration=8.411617046 podStartE2EDuration="8.411617046s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.404826213 +0000 UTC m=+1393.485763537" watchObservedRunningTime="2026-02-14 04:32:41.411617046 +0000 UTC m=+1393.492554360" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.414579 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerStarted","Data":"6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.415585 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.419950 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerStarted","Data":"0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.427624 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" podStartSLOduration=8.427600295 podStartE2EDuration="8.427600295s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.422752855 +0000 UTC m=+1393.503690169" watchObservedRunningTime="2026-02-14 04:32:41.427600295 +0000 UTC m=+1393.508537609" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.428319 4867 generic.go:334] "Generic (PLEG): container finished" podID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerID="82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c" exitCode=0 Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.428411 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.443796 4867 generic.go:334] "Generic (PLEG): container finished" podID="2b7729cf-7332-4432-999f-fbee997b2201" containerID="6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c" exitCode=0 Feb 
14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.443860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerDied","Data":"6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.461927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerStarted","Data":"da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320"} Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.475006 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-677c4ffcdf-n44s6" podStartSLOduration=9.474984328 podStartE2EDuration="9.474984328s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.443427741 +0000 UTC m=+1393.524365045" watchObservedRunningTime="2026-02-14 04:32:41.474984328 +0000 UTC m=+1393.555921642" Feb 14 04:32:41 crc kubenswrapper[4867]: I0214 04:32:41.505461 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-t8trt" podStartSLOduration=8.505434727 podStartE2EDuration="8.505434727s" podCreationTimestamp="2026-02-14 04:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:41.462524264 +0000 UTC m=+1393.543461578" watchObservedRunningTime="2026-02-14 04:32:41.505434727 +0000 UTC m=+1393.586372041" Feb 14 04:32:41 crc kubenswrapper[4867]: E0214 04:32:41.911025 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406727d4_ffca_4ade_b0ca_b5dbfcb23e24.slice/crio-conmon-12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406727d4_ffca_4ade_b0ca_b5dbfcb23e24.slice/crio-12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.481155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerStarted","Data":"5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.481725 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.485914 4867 generic.go:334] "Generic (PLEG): container finished" podID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerID="a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.485995 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.487556 
4867 generic.go:334] "Generic (PLEG): container finished" podID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerID="ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.487621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerDied","Data":"ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.490192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerStarted","Data":"f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.490399 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.506855 4867 generic.go:334] "Generic (PLEG): container finished" podID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerID="3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.506959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerDied","Data":"3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.509555 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" podStartSLOduration=10.509477123 podStartE2EDuration="10.509477123s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:42.497210554 +0000 UTC m=+1394.578147868" watchObservedRunningTime="2026-02-14 04:32:42.509477123 +0000 UTC m=+1394.590414437" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.521059 4867 generic.go:334] "Generic (PLEG): container finished" podID="708fbc3f-a05a-4b29-b455-32db117495d1" containerID="0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.521407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerDied","Data":"0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.531105 4867 generic.go:334] "Generic (PLEG): container finished" podID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerID="12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.531189 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.538744 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7797898b6d-54xz8" podStartSLOduration=3.538724699 podStartE2EDuration="3.538724699s" 
podCreationTimestamp="2026-02-14 04:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:42.53578193 +0000 UTC m=+1394.616719254" watchObservedRunningTime="2026-02-14 04:32:42.538724699 +0000 UTC m=+1394.619662013" Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.539101 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.544047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerStarted","Data":"a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365"} Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.546114 4867 generic.go:334] "Generic (PLEG): container finished" podID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerID="d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43" exitCode=0 Feb 14 04:32:42 crc kubenswrapper[4867]: I0214 04:32:42.546397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerDied","Data":"d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.028531 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.032972 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.066724 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.068434 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.072049 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.072115 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.087928 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.090139 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.098992 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.099225 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.103535 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.123761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155325 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155386 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.155403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 
04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257685 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257842 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.257900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258018 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " 
pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258053 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.258079 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.264813 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.264885 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.265222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.267660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.272773 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.281289 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"heat-api-6f55d59bf5-wfw72\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.362640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc 
kubenswrapper[4867]: I0214 04:32:43.362948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.362974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363025 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.363443 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372292 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.372677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.375816 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.378160 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"heat-cfnapi-74d8ffb764-wz9cp\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.400286 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.430204 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.530958 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.542542 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.554618 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564602 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c5fcd7cb-sr8z9" event={"ID":"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149","Type":"ContainerDied","Data":"39d679b02b54e70585a87ea7dbf473acb26533d3e4ea7319177999bccaf06766"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.564967 4867 scope.go:117] "RemoveContainer" containerID="a00d0ebf0ff2de031204758114db4258ee7b4d688e4e3e8fcab6451b81a33050" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" event={"ID":"2b7729cf-7332-4432-999f-fbee997b2201","Type":"ContainerDied","Data":"b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571759 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d07c0e74e8771e0fbf29c29a6ed70e22ae7cb3f29a34ff3052d92b0f985a1f" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.571827 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-8539-account-create-update-9j9p8" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.577924 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"406727d4-ffca-4ade-b0ca-b5dbfcb23e24","Type":"ContainerDied","Data":"a3cc1da73263e85bbf2b7d750ab646192fbf22c988007a55f775707de3030a59"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.578029 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.581561 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-5ffts" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.582287 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-5ffts" event={"ID":"289f81c2-9092-4a51-a1b4-8eedaa09aedb","Type":"ContainerDied","Data":"d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446"} Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.582370 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44967a1ebd4e2f70ff240361ffa85a32ea8014b336becbf306d8e84e9755446" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.669992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") pod \"2b7729cf-7332-4432-999f-fbee997b2201\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670170 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670234 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") pod \"2b7729cf-7332-4432-999f-fbee997b2201\" (UID: \"2b7729cf-7332-4432-999f-fbee997b2201\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670370 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670405 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") pod \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: 
\"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670469 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670561 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670790 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") pod \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\" (UID: \"289f81c2-9092-4a51-a1b4-8eedaa09aedb\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670816 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.670869 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.672298 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "289f81c2-9092-4a51-a1b4-8eedaa09aedb" (UID: "289f81c2-9092-4a51-a1b4-8eedaa09aedb"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675600 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") pod \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\" (UID: \"406727d4-ffca-4ade-b0ca-b5dbfcb23e24\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675663 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.675690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") pod \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\" (UID: \"9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149\") " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.676872 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/289f81c2-9092-4a51-a1b4-8eedaa09aedb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681029 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts" (OuterVolumeSpecName: "scripts") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.681773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b7729cf-7332-4432-999f-fbee997b2201" (UID: "2b7729cf-7332-4432-999f-fbee997b2201"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.682128 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs" (OuterVolumeSpecName: "logs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.691417 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h" (OuterVolumeSpecName: "kube-api-access-bbm8h") pod "2b7729cf-7332-4432-999f-fbee997b2201" (UID: "2b7729cf-7332-4432-999f-fbee997b2201"). InnerVolumeSpecName "kube-api-access-bbm8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.691780 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4" (OuterVolumeSpecName: "kube-api-access-prsd4") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "kube-api-access-prsd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.694764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs" (OuterVolumeSpecName: "kube-api-access-ncmbs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "kube-api-access-ncmbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.725667 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.734775 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n" (OuterVolumeSpecName: "kube-api-access-pfw6n") pod "289f81c2-9092-4a51-a1b4-8eedaa09aedb" (UID: "289f81c2-9092-4a51-a1b4-8eedaa09aedb"). InnerVolumeSpecName "kube-api-access-pfw6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.746079 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (OuterVolumeSpecName: "glance") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779023 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncmbs\" (UniqueName: \"kubernetes.io/projected/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-kube-api-access-ncmbs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779237 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prsd4\" (UniqueName: \"kubernetes.io/projected/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-kube-api-access-prsd4\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779295 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbm8h\" (UniqueName: \"kubernetes.io/projected/2b7729cf-7332-4432-999f-fbee997b2201-kube-api-access-bbm8h\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779365 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779419 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779478 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b7729cf-7332-4432-999f-fbee997b2201-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779548 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779613 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" " Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779666 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfw6n\" (UniqueName: \"kubernetes.io/projected/289f81c2-9092-4a51-a1b4-8eedaa09aedb-kube-api-access-pfw6n\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.779914 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.823009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.827225 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.827398 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d") on node "crc"
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.833159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config" (OuterVolumeSpecName: "config") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.858791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.859830 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.890721 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891129 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891157 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891172 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.891187 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.927122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data" (OuterVolumeSpecName: "config-data") pod "406727d4-ffca-4ade-b0ca-b5dbfcb23e24" (UID: "406727d4-ffca-4ade-b0ca-b5dbfcb23e24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.967636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" (UID: "9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.993919 4867 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:43 crc kubenswrapper[4867]: I0214 04:32:43.993954 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/406727d4-ffca-4ade-b0ca-b5dbfcb23e24-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.244494 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.270064 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.291312 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292123 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292137 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292150 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update"
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292173 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292180 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292190 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create"
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292240 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292246 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log"
Feb 14 04:32:44 crc kubenswrapper[4867]: E0214 04:32:44.292258 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292263 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292455 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" containerName="mariadb-database-create"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292467 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-api"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292490 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" containerName="neutron-httpd"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292516 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7729cf-7332-4432-999f-fbee997b2201" containerName="mariadb-account-create-update"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.292538 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" containerName="glance-log"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.298004 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.301467 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.301664 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.322732 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.408269 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.408618 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410466 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.410716 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411038 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411754 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.411831 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.514060 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515150 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.515797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516031 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516129 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516220 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516261 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516341 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.516745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5e42dca-0c7d-485a-95bc-b26db4e12369-logs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.519942 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.520529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.521158 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.523686 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.523740 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2911fee5623424610909110255172e6a670235da2c51b706f28d869aaa21b2f4/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.525491 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5e42dca-0c7d-485a-95bc-b26db4e12369-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.538417 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfv5z\" (UniqueName: \"kubernetes.io/projected/f5e42dca-0c7d-485a-95bc-b26db4e12369-kube-api-access-cfv5z\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.608737 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c5fcd7cb-sr8z9"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.633101 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36cbded9-e56a-4712-a8db-251c7dcbb87d\") pod \"glance-default-external-api-0\" (UID: \"f5e42dca-0c7d-485a-95bc-b26db4e12369\") " pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.663680 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"]
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.674388 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74c5fcd7cb-sr8z9"]
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.710409 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.858854 4867 scope.go:117] "RemoveContainer" containerID="a3270a5cb491a003b02a8ff42a33368a493af6d0e24d1558f76c114ff7412184"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.919473 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.960285 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.964742 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl"
Feb 14 04:32:44 crc kubenswrapper[4867]: I0214 04:32:44.978492 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.040712 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406727d4-ffca-4ade-b0ca-b5dbfcb23e24" path="/var/lib/kubelet/pods/406727d4-ffca-4ade-b0ca-b5dbfcb23e24/volumes"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.042668 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149" path="/var/lib/kubelet/pods/9dd8bb15-ad3b-4fd9-985a-f6aaf2a8e149/volumes"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") pod \"708fbc3f-a05a-4b29-b455-32db117495d1\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045674 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") pod \"730dbd9b-ddff-4d09-89ff-b9135ed83042\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") pod \"80c71d92-a9d1-4256-b7be-678dc34d1562\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.045828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") pod \"80c71d92-a9d1-4256-b7be-678dc34d1562\" (UID: \"80c71d92-a9d1-4256-b7be-678dc34d1562\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.046938 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "730dbd9b-ddff-4d09-89ff-b9135ed83042" (UID: "730dbd9b-ddff-4d09-89ff-b9135ed83042"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.047415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80c71d92-a9d1-4256-b7be-678dc34d1562" (UID: "80c71d92-a9d1-4256-b7be-678dc34d1562"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.050686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "708fbc3f-a05a-4b29-b455-32db117495d1" (UID: "708fbc3f-a05a-4b29-b455-32db117495d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") pod \"708fbc3f-a05a-4b29-b455-32db117495d1\" (UID: \"708fbc3f-a05a-4b29-b455-32db117495d1\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051251 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") pod \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051346 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") pod \"730dbd9b-ddff-4d09-89ff-b9135ed83042\" (UID: \"730dbd9b-ddff-4d09-89ff-b9135ed83042\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051412 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m" (OuterVolumeSpecName: "kube-api-access-fnn7m") pod "80c71d92-a9d1-4256-b7be-678dc34d1562" (UID: "80c71d92-a9d1-4256-b7be-678dc34d1562"). InnerVolumeSpecName "kube-api-access-fnn7m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.051469 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") pod \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\" (UID: \"041c55d6-87c7-47b4-a53b-9b38cb85e3d2\") "
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052245 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041c55d6-87c7-47b4-a53b-9b38cb85e3d2" (UID: "041c55d6-87c7-47b4-a53b-9b38cb85e3d2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052905 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.052971 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/708fbc3f-a05a-4b29-b455-32db117495d1-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.053044 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/730dbd9b-ddff-4d09-89ff-b9135ed83042-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.053097 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80c71d92-a9d1-4256-b7be-678dc34d1562-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.053150 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnn7m\" (UniqueName: \"kubernetes.io/projected/80c71d92-a9d1-4256-b7be-678dc34d1562-kube-api-access-fnn7m\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.055350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v" (OuterVolumeSpecName: "kube-api-access-v628v") pod "041c55d6-87c7-47b4-a53b-9b38cb85e3d2" (UID: "041c55d6-87c7-47b4-a53b-9b38cb85e3d2"). InnerVolumeSpecName "kube-api-access-v628v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.061192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7" (OuterVolumeSpecName: "kube-api-access-k5ch7") pod "708fbc3f-a05a-4b29-b455-32db117495d1" (UID: "708fbc3f-a05a-4b29-b455-32db117495d1"). InnerVolumeSpecName "kube-api-access-k5ch7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.063346 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t" (OuterVolumeSpecName: "kube-api-access-c287t") pod "730dbd9b-ddff-4d09-89ff-b9135ed83042" (UID: "730dbd9b-ddff-4d09-89ff-b9135ed83042"). InnerVolumeSpecName "kube-api-access-c287t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.087544 4867 scope.go:117] "RemoveContainer" containerID="12a1d2cb9718993931d34f7f092630cac049d31e66bb907373a6a9ebfd3b2034"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160344 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5ch7\" (UniqueName: \"kubernetes.io/projected/708fbc3f-a05a-4b29-b455-32db117495d1-kube-api-access-k5ch7\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160651 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v628v\" (UniqueName: \"kubernetes.io/projected/041c55d6-87c7-47b4-a53b-9b38cb85e3d2-kube-api-access-v628v\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.160661 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c287t\" (UniqueName: \"kubernetes.io/projected/730dbd9b-ddff-4d09-89ff-b9135ed83042-kube-api-access-c287t\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.258025 4867 scope.go:117] "RemoveContainer" containerID="461e174da477dbbe46e48418e6c4b74717f5d942fc161f7932d038f71bf9aca1"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.643200 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-t8trt" event={"ID":"708fbc3f-a05a-4b29-b455-32db117495d1","Type":"ContainerDied","Data":"4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48"}
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.644220 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf6961920f386662ea24ebe41d55c71401248492bb629399ef841615543fa48"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.643735 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-t8trt"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.647039 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"}
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657139 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a338-account-create-update-2zjhb" event={"ID":"041c55d6-87c7-47b4-a53b-9b38cb85e3d2","Type":"ContainerDied","Data":"38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2"}
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657200 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38548c5a0efacccdfcfdf4445dc4dbf80ccfe685a7da35040dbadb7094f914d2"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.657314 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a338-account-create-update-2zjhb"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660893 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-slfhr"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-slfhr" event={"ID":"730dbd9b-ddff-4d09-89ff-b9135ed83042","Type":"ContainerDied","Data":"26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab"}
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.660957 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26251869056a11a68a5d33b008a4b88fb45a9155c0e2b8d4aa9fdfe9d69f6cab"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8094-account-create-update-pbbgl" event={"ID":"80c71d92-a9d1-4256-b7be-678dc34d1562","Type":"ContainerDied","Data":"073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0"}
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692580 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="073c45a9d481932551862dd339dfbf035cc064529affc0929ce845e3152133c0"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.692648 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8094-account-create-update-pbbgl"
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.700197 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"]
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.848773 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"]
Feb 14 04:32:45 crc kubenswrapper[4867]: I0214 04:32:45.936184 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.735024 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce" exitCode=1
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.735712 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.736067 4867 scope.go:117] "RemoveContainer" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.746448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerStarted","Data":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.746489 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerStarted","Data":"049d086e76bc10d2a5f14c7d8a9fe02a2d5fd8eadb747b6e9d8413f65e7ceb0e"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.747142 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f55d59bf5-wfw72"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerStarted","Data":"25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765354 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-667b98697-gxqph" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" containerID="cri-o://25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" gracePeriod=60
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.765582 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-667b98697-gxqph"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.791689 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f55d59bf5-wfw72" podStartSLOduration=3.791660481 podStartE2EDuration="3.791660481s" podCreationTimestamp="2026-02-14 04:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:46.784086218 +0000 UTC m=+1398.865023532" watchObservedRunningTime="2026-02-14 04:32:46.791660481 +0000 UTC m=+1398.872597795"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.795549 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerStarted","Data":"830da82a952f0eb79b72815166e8401af585ff9b46564a5260025bbc1ac28ad6"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.796299 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.800991 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3" exitCode=1
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.801061 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.801776 4867 scope.go:117] "RemoveContainer" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.813773 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerStarted","Data":"d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.813959 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" containerID="cri-o://d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" gracePeriod=60
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.814098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.821835 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"ee1285703ead2f3b077f5f2b6bf2a06f26f518800f3dabf9a444c8ff6e1390dd"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.859735 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-667b98697-gxqph" podStartSLOduration=9.061091547 podStartE2EDuration="14.85971289s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="2026-02-14 04:32:39.214776009 +0000 UTC m=+1391.295713313" lastFinishedPulling="2026-02-14 04:32:45.013397342 +0000 UTC m=+1397.094334656" observedRunningTime="2026-02-14 04:32:46.805862133 +0000 UTC m=+1398.886799447" watchObservedRunningTime="2026-02-14 04:32:46.85971289 +0000 UTC m=+1398.940650204"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.868690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"}
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.892711 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" podStartSLOduration=3.892683295 podStartE2EDuration="3.892683295s" podCreationTimestamp="2026-02-14 04:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:46.827182375 +0000 UTC m=+1398.908119689" watchObservedRunningTime="2026-02-14 04:32:46.892683295 +0000 UTC m=+1398.973620629"
Feb 14 04:32:46 crc kubenswrapper[4867]: I0214 04:32:46.924374 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" podStartSLOduration=9.177062453 podStartE2EDuration="14.924355956s" podCreationTimestamp="2026-02-14 04:32:32 +0000 UTC" firstStartedPulling="2026-02-14 04:32:39.212756075 +0000 UTC m=+1391.293693389" lastFinishedPulling="2026-02-14 04:32:44.960049568 +0000 UTC m=+1397.040986892" observedRunningTime="2026-02-14 04:32:46.9047533 +0000 UTC m=+1398.985690614" watchObservedRunningTime="2026-02-14 04:32:46.924355956 +0000 UTC m=+1399.005293270"
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.564195 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl"
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.708423 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"]
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.708700 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" containerID="cri-o://bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" gracePeriod=10
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.899696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"0118fc901e642976e79b1611b934e8452bfc845018c10ec766a047bc2408aaf8"}
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.903367 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerStarted","Data":"f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"}
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.905247 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-cf78bc599-cbb7h"
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.908280 4867 generic.go:334] "Generic (PLEG): container finished" podID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerID="25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" exitCode=0
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.908339 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerDied","Data":"25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4"}
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.912876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerStarted","Data":"11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7"}
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.917095 4867 generic.go:334] "Generic (PLEG): container finished" podID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerID="d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" exitCode=0
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.918072 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerDied","Data":"d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6"}
Feb 14 04:32:47 crc kubenswrapper[4867]: I0214 04:32:47.959589 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podStartSLOduration=5.234941165 podStartE2EDuration="8.959569172s" podCreationTimestamp="2026-02-14 04:32:39 +0000 UTC" firstStartedPulling="2026-02-14 04:32:41.416850506 +0000 UTC m=+1393.497787820" lastFinishedPulling="2026-02-14 04:32:45.141478513 +0000 UTC m=+1397.222415827" observedRunningTime="2026-02-14 04:32:47.954335921 +0000 UTC m=+1400.035273235" watchObservedRunningTime="2026-02-14 04:32:47.959569172 +0000 UTC m=+1400.040506476"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.113538 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.194472 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228246 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228529 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.228658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") pod \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\" (UID: \"6c28a361-2a59-45f2-baeb-e4d5313b6c17\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.243330 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.244567 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2" (OuterVolumeSpecName: "kube-api-access-tfcz2") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "kube-api-access-tfcz2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.331756 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.341921 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.342068 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.342104 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") pod \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\" (UID: \"4fd29ee2-33af-4629-8c0d-fa62c0e07240\") "
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.343923 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfcz2\" (UniqueName: \"kubernetes.io/projected/6c28a361-2a59-45f2-baeb-e4d5313b6c17-kube-api-access-tfcz2\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.343948 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.387429 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.436343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t" (OuterVolumeSpecName: "kube-api-access-hrk9t") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "kube-api-access-hrk9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.440572 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.440864 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log" containerID="cri-o://70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e" gracePeriod=30
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.441463 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd" containerID="cri-o://784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4" gracePeriod=30
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.446765 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.446796 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrk9t\" (UniqueName: \"kubernetes.io/projected/4fd29ee2-33af-4629-8c0d-fa62c0e07240-kube-api-access-hrk9t\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.475013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data" (OuterVolumeSpecName: "config-data") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.482706 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c28a361-2a59-45f2-baeb-e4d5313b6c17" (UID: "6c28a361-2a59-45f2-baeb-e4d5313b6c17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.524360 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data" (OuterVolumeSpecName: "config-data") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.530722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fd29ee2-33af-4629-8c0d-fa62c0e07240" (UID: "4fd29ee2-33af-4629-8c0d-fa62c0e07240"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552248 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552282 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd29ee2-33af-4629-8c0d-fa62c0e07240-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552295 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.552305 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c28a361-2a59-45f2-baeb-e4d5313b6c17-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.935364 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.978754 4867 generic.go:334] "Generic (PLEG): container finished" podID="746b9097-84d0-4d00-a92c-808df9206d8a" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" exitCode=0
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.979950 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"}
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-pq99b" event={"ID":"746b9097-84d0-4d00-a92c-808df9206d8a","Type":"ContainerDied","Data":"9ac4c13dc3497256b1b6cb1aa9076b705851041e05e9b02af05f329d0735ed8b"}
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.980805 4867 scope.go:117] "RemoveContainer" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.997989 4867 generic.go:334] "Generic (PLEG): container finished" podID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" exitCode=1
Feb 14 04:32:48 crc kubenswrapper[4867]: I0214 04:32:48.998813 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574"
Feb 14 04:32:48 crc kubenswrapper[4867]: E0214 04:32:48.999126 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14"
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.011865 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4"
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.031572 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" exitCode=1
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.043045 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"
Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.043308 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e"
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.045891 4867 generic.go:334] "Generic (PLEG): container finished" podID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerID="70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e" exitCode=143
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054094 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574"}
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054144 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74c87bfcc9-g5dr4" event={"ID":"6c28a361-2a59-45f2-baeb-e4d5313b6c17","Type":"ContainerDied","Data":"279dcb9c4b235ad9ee4d170269ff377a20b494792ae727e8d6532186bac5ba51"}
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054162 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"}
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.054179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"}
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.068182 4867 scope.go:117] "RemoveContainer" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e"
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.076588 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-667b98697-gxqph" event={"ID":"4fd29ee2-33af-4629-8c0d-fa62c0e07240","Type":"ContainerDied","Data":"6a911f22f2445bf520e5b58ee0d37ec6810d7143ae0f24d44f2a1ba98f13ca47"}
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.093801 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-667b98697-gxqph"
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097821 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.097886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098014 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.098168 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") pod \"746b9097-84d0-4d00-a92c-808df9206d8a\" (UID: \"746b9097-84d0-4d00-a92c-808df9206d8a\") "
Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.163017 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd" (OuterVolumeSpecName: "kube-api-access-j8rdd") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "kube-api-access-j8rdd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.188871 4867 scope.go:117] "RemoveContainer" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.202434 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8rdd\" (UniqueName: \"kubernetes.io/projected/746b9097-84d0-4d00-a92c-808df9206d8a-kube-api-access-j8rdd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.204107 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": container with ID starting with bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d not found: ID does not exist" containerID="bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.204148 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d"} err="failed to get container status \"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": rpc error: code = NotFound desc = could not find container \"bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d\": container with ID starting with bbbd663b685e649aba7c6d25d4f5d873760ad4fd5bfb839326762e3bc9aeb52d not found: ID does not exist" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.204176 4867 scope.go:117] "RemoveContainer" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.218702 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": container with ID starting with 5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e not found: ID does not exist" containerID="5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.218750 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e"} err="failed to get container status \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": rpc error: code = NotFound desc = could not find container \"5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e\": container with ID starting with 5b62b9c4c18730bd95cd769f97a701a763d73eb11bbefea4c9a65847618af00e not found: ID does not exist" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.218783 4867 scope.go:117] "RemoveContainer" containerID="3f0c6b148827ea32a231e9e007d2dafbba391e6afdc3bb2dbabd5ec06a7c50e3" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.277480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config" (OuterVolumeSpecName: "config") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.306031 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.314988 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.362146 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.391710 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.407652 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.407682 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.415777 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-667b98697-gxqph"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.438133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.443581 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.449737 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "746b9097-84d0-4d00-a92c-808df9206d8a" (UID: "746b9097-84d0-4d00-a92c-808df9206d8a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.461572 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74c87bfcc9-g5dr4"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.478574 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479077 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479094 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479108 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479113 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479134 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479140 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479149 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="init" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479155 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="init" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479170 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479176 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479188 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479193 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479216 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479221 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: E0214 04:32:49.479235 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479241 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479454 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" containerName="dnsmasq-dns" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479463 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479476 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479488 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" containerName="heat-cfnapi" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479518 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" containerName="mariadb-account-create-update" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479531 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" containerName="mariadb-database-create" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.479543 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" containerName="heat-api" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.480345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.483750 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fspzg" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.483976 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.486217 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.491392 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.510069 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.510102 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/746b9097-84d0-4d00-a92c-808df9206d8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.536614 4867 scope.go:117] "RemoveContainer" containerID="d3e9e4355332a2ff0d32cacebbea6af4c97294a693e97fe84efa4e22b02595f6" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.579971 4867 scope.go:117] "RemoveContainer" containerID="5853e720bee74ecffda2b3607cd04e8d46a528baf84e2f915f4143c80e908cce" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.627825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.713689 4867 scope.go:117] "RemoveContainer" containerID="25e73f24691faede9bee41e1ee55092d6468385d3e91a16311d3419e479b9ed4" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.726614 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729730 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729851 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.729872 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.736739 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5c9776ccc5-pq99b"] Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.739181 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.742409 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.747085 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.761082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"nova-cell0-conductor-db-sync-vwg9c\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:49 crc kubenswrapper[4867]: I0214 04:32:49.823086 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.051072 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.095085 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:50 crc kubenswrapper[4867]: E0214 04:32:50.095368 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.113530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5e42dca-0c7d-485a-95bc-b26db4e12369","Type":"ContainerStarted","Data":"c2581e3d0590fbe6419e0dbf9d06af960b2f1cc3d05ee57976db9265c7418fa0"} Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.116809 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerStarted","Data":"cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9"} Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.117415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.118532 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:50 crc kubenswrapper[4867]: E0214 
04:32:50.118778 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.146703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.146806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8f9d657ff-n8g4q" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.164640 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.171075 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.997061285 podStartE2EDuration="12.171057351s" podCreationTimestamp="2026-02-14 04:32:38 +0000 UTC" firstStartedPulling="2026-02-14 04:32:40.250704873 +0000 UTC m=+1392.331642187" lastFinishedPulling="2026-02-14 04:32:48.424700939 +0000 UTC m=+1400.505638253" observedRunningTime="2026-02-14 04:32:50.168379979 +0000 UTC m=+1402.249317303" watchObservedRunningTime="2026-02-14 04:32:50.171057351 +0000 UTC m=+1402.251994665" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.213873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.213845941 podStartE2EDuration="6.213845941s" podCreationTimestamp="2026-02-14 04:32:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:50.19595741 +0000 UTC m=+1402.276894724" watchObservedRunningTime="2026-02-14 04:32:50.213845941 +0000 UTC m=+1402.294783265" Feb 14 04:32:50 crc kubenswrapper[4867]: I0214 04:32:50.433237 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.012542 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd29ee2-33af-4629-8c0d-fa62c0e07240" path="/var/lib/kubelet/pods/4fd29ee2-33af-4629-8c0d-fa62c0e07240/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.014363 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c28a361-2a59-45f2-baeb-e4d5313b6c17" path="/var/lib/kubelet/pods/6c28a361-2a59-45f2-baeb-e4d5313b6c17/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.015052 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746b9097-84d0-4d00-a92c-808df9206d8a" path="/var/lib/kubelet/pods/746b9097-84d0-4d00-a92c-808df9206d8a/volumes" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138239 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerStarted","Data":"bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142"} Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138733 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" 
containerName="ceilometer-central-agent" containerID="cri-o://979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138776 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd" containerID="cri-o://cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138838 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent" containerID="cri-o://02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.138814 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core" containerID="cri-o://cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4" gracePeriod=30 Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.139108 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:51 crc kubenswrapper[4867]: E0214 04:32:51.139544 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:51 crc kubenswrapper[4867]: I0214 04:32:51.139631 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:51 crc kubenswrapper[4867]: E0214 04:32:51.139896 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.198496 4867 generic.go:334] "Generic (PLEG): container finished" podID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerID="784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.198649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205115 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205148 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4" exitCode=2 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205159 4867 generic.go:334] "Generic 
(PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.205168 4867 generic.go:334] "Generic (PLEG): container finished" podID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerID="979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144" exitCode=0 Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.206134 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904" Feb 14 04:32:52 crc kubenswrapper[4867]: E0214 04:32:52.206534 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-cf78bc599-cbb7h_openstack(4e650fa8-a893-47e0-a5d5-0df60430ea9e)\"" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207122 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207133 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"} Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.207644 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574" Feb 14 04:32:52 crc kubenswrapper[4867]: E0214 04:32:52.207955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8f9d657ff-n8g4q_openstack(bf9a1d71-05e1-40ab-90a7-530d2083fe14)\"" pod="openstack/heat-api-8f9d657ff-n8g4q" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.251824 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.403964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404065 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404127 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404147 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404260 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404284 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.404324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") pod \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\" (UID: \"30f61907-9cb4-4873-99eb-bbb5adf21fcb\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.407876 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.408262 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.413928 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts" (OuterVolumeSpecName: "scripts") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.427959 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm" (OuterVolumeSpecName: "kube-api-access-glsgm") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "kube-api-access-glsgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.452590 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-677c4ffcdf-n44s6" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.502809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.514893 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glsgm\" (UniqueName: \"kubernetes.io/projected/30f61907-9cb4-4873-99eb-bbb5adf21fcb-kube-api-access-glsgm\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515726 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515745 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515758 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30f61907-9cb4-4873-99eb-bbb5adf21fcb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.515770 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.681555 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.693804 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data" (OuterVolumeSpecName: "config-data") pod "30f61907-9cb4-4873-99eb-bbb5adf21fcb" (UID: "30f61907-9cb4-4873-99eb-bbb5adf21fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.721034 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.721071 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f61907-9cb4-4873-99eb-bbb5adf21fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.762185 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822365 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.822626 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.826339 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts" (OuterVolumeSpecName: "scripts") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.826738 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.832420 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.832663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2" (OuterVolumeSpecName: "kube-api-access-qmjl2") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "kube-api-access-qmjl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.834113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.834237 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") pod \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\" (UID: \"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0\") " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835449 4867 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmjl2\" (UniqueName: \"kubernetes.io/projected/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-kube-api-access-qmjl2\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.835490 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.839078 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs" (OuterVolumeSpecName: "logs") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.860779 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (OuterVolumeSpecName: "glance") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). 
InnerVolumeSpecName "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.871535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.890241 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data" (OuterVolumeSpecName: "config-data") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.919305 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" (UID: "7f21b5d2-75e5-4cc5-96d0-670e9ed88df0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.937948 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938098 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938115 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938126 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.938175 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" " Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.969298 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:32:52 crc kubenswrapper[4867]: I0214 04:32:52.969449 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25") on node "crc"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.040784 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30f61907-9cb4-4873-99eb-bbb5adf21fcb","Type":"ContainerDied","Data":"0c811d9a27d93bea50cf31c5a59216074fd035a7dfb9975cb4e0ef8eaca3d79f"}
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225335 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.225341 4867 scope.go:117] "RemoveContainer" containerID="cb6215577dbf26db944d9b9070ec6a13180867b3c5b2b1bcee4c08837896c2c9"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.231335 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.231305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7f21b5d2-75e5-4cc5-96d0-670e9ed88df0","Type":"ContainerDied","Data":"a058ad6cbd2191072dd3095571bbab2223991ccf0e5587286e857f99ac25261b"}
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.258395 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.276393 4867 scope.go:117] "RemoveContainer" containerID="cb1ce511267c1e1bb4cd8621896e5bff9f87cd7f132d080f973fbe44eacb8ee4"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.284312 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.303413 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.340521 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.357673 4867 scope.go:117] "RemoveContainer" containerID="02a21d192b838bcd292ed433b9bda0d9ab33f8abcba6bd3963579f24d84fa41e"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.403269 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404380 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404446 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404455 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.404481 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.404490 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405342 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405359 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405400 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405409 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: E0214 04:32:53.405427 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405436 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405949 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="sg-core"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.405972 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406009 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" containerName="glance-log"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406028 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-central-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406049 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="ceilometer-notification-agent"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.406084 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" containerName="proxy-httpd"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.409850 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.419243 4867 scope.go:117] "RemoveContainer" containerID="979729ed029e7493c86fa97c73b6e4c07235cd2c42a9dffb387845d8efe2d144"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.420109 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.420630 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.447552 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.455815 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.458543 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.461929 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.467660 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.473097 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.515199 4867 scope.go:117] "RemoveContainer" containerID="784cfaee3c31733050d3a1efb21352103c907f523d29c5e564d74f7dfef79bf4"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.562236 4867 scope.go:117] "RemoveContainer" containerID="70953f2317efbfb87d7a56f4d71c52385c4847b32874288de71ce95ba977de9e"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566698 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.566900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567285 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567396 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567448 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567709 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567760 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.567951 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.568028 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.568281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670813 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670902 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.670927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671011 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671034 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671149 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671302 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-logs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.671724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.672249 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b66304c6-61a4-4b8b-b77b-dd816c0a0890-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.672995 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.676567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.681407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.682852 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683025 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683055 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/75d9da1254ce7e619341632ffa065d218ee4aa27b9558c722e4cc97bdf7e072d/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.683195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.684151 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.684644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.705982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b66304c6-61a4-4b8b-b77b-dd816c0a0890-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.708498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.711091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"ceilometer-0\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") " pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.727490 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h46x6\" (UniqueName: \"kubernetes.io/projected/b66304c6-61a4-4b8b-b77b-dd816c0a0890-kube-api-access-h46x6\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.775525 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0f267588-31b6-42e6-a1eb-3b23ad395d25\") pod \"glance-default-internal-api-0\" (UID: \"b66304c6-61a4-4b8b-b77b-dd816c0a0890\") " pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.811615 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:32:53 crc kubenswrapper[4867]: I0214 04:32:53.818260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.402562 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:32:54 crc kubenswrapper[4867]: W0214 04:32:54.415841 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53d13a71_03e0_46f0_9ca1_a868d38727f8.slice/crio-d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d WatchSource:0}: Error finding container d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d: Status 404 returned error can't find the container with id d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.549968 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.711315 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.711368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.756040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:54 crc kubenswrapper[4867]: I0214 04:32:54.775191 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.024162 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f61907-9cb4-4873-99eb-bbb5adf21fcb" path="/var/lib/kubelet/pods/30f61907-9cb4-4873-99eb-bbb5adf21fcb/volumes"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.025338 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f21b5d2-75e5-4cc5-96d0-670e9ed88df0" path="/var/lib/kubelet/pods/7f21b5d2-75e5-4cc5-96d0-670e9ed88df0/volumes"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.362212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"1ea57d131344734c752fa842970f47956e33a4cc4a22d8305307580b76055219"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.362269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"fc781c3d26b2502683df7246dfdd94466c432563aa0b1a5c5fc2fb6ceabe2b9a"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368221 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d"}
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.368559 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.369592 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.484083 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f55d59bf5-wfw72"
Feb 14 04:32:55 crc kubenswrapper[4867]: I0214 04:32:55.578820 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.324568 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407439 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8f9d657ff-n8g4q" event={"ID":"bf9a1d71-05e1-40ab-90a7-530d2083fe14","Type":"ContainerDied","Data":"da29745824d45aedf75030755306f42e86da161913c87bf4c3798a011179b320"}
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407518 4867 scope.go:117] "RemoveContainer" containerID="c6319e5096476af1cd4794441879b2c2e802659e71c263027bc77f340f5ae574"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.407810 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8f9d657ff-n8g4q"
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409294 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409403 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.409632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") pod \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\" (UID: \"bf9a1d71-05e1-40ab-90a7-530d2083fe14\") "
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.421609 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv" (OuterVolumeSpecName: "kube-api-access-jxrqv") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "kube-api-access-jxrqv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.421734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"}
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.440404 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.485643 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566839 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566879 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.566889 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxrqv\" (UniqueName: \"kubernetes.io/projected/bf9a1d71-05e1-40ab-90a7-530d2083fe14-kube-api-access-jxrqv\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.618704 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data" (OuterVolumeSpecName: "config-data") pod "bf9a1d71-05e1-40ab-90a7-530d2083fe14" (UID: "bf9a1d71-05e1-40ab-90a7-530d2083fe14"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.671486 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9a1d71-05e1-40ab-90a7-530d2083fe14-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.807570 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:56 crc kubenswrapper[4867]: I0214 04:32:56.823210 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8f9d657ff-n8g4q"]
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.016579 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" path="/var/lib/kubelet/pods/bf9a1d71-05e1-40ab-90a7-530d2083fe14/volumes"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.269909 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.342048 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"]
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.520608 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"}
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.561662 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.562030 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b66304c6-61a4-4b8b-b77b-dd816c0a0890","Type":"ContainerStarted","Data":"ba02f048f072a54327e598e13510a8bb3841c70f454f4bda93b06c6a6f71f60d"}
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.562067 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.620113 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.6200870080000005 podStartE2EDuration="4.620087008s" podCreationTimestamp="2026-02-14 04:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:32:57.595717513 +0000 UTC m=+1409.676654817" watchObservedRunningTime="2026-02-14 04:32:57.620087008 +0000 UTC m=+1409.701024322"
Feb 14 04:32:57 crc kubenswrapper[4867]: I0214 04:32:57.958109 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h"
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.020777 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021441 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.021472 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.028866 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.056094 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m" (OuterVolumeSpecName: "kube-api-access-mh87m") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "kube-api-access-mh87m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.121244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.123645 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data" (OuterVolumeSpecName: "config-data") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.123937 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") pod \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\" (UID: \"4e650fa8-a893-47e0-a5d5-0df60430ea9e\") "
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125259 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh87m\" (UniqueName: \"kubernetes.io/projected/4e650fa8-a893-47e0-a5d5-0df60430ea9e-kube-api-access-mh87m\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125287 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.125301 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:58 crc kubenswrapper[4867]: W0214 04:32:58.125983 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/4e650fa8-a893-47e0-a5d5-0df60430ea9e/volumes/kubernetes.io~secret/config-data
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.126014 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data" (OuterVolumeSpecName: "config-data") pod "4e650fa8-a893-47e0-a5d5-0df60430ea9e" (UID: "4e650fa8-a893-47e0-a5d5-0df60430ea9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.227389 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e650fa8-a893-47e0-a5d5-0df60430ea9e-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602068 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-cf78bc599-cbb7h" event={"ID":"4e650fa8-a893-47e0-a5d5-0df60430ea9e","Type":"ContainerDied","Data":"a2992054f9a747435b4dfa57d015a5d3a94fc0840d14d8df3c6c61038a7f9365"}
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602131 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-cf78bc599-cbb7h"
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.602182 4867 scope.go:117] "RemoveContainer" containerID="f3518d1f2b8b6a76c52e19fef766123bf78780e7f313d75009ab8059dc0d7904"
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.663072 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"]
Feb 14 04:32:58 crc kubenswrapper[4867]: I0214 04:32:58.692619 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-cf78bc599-cbb7h"]
Feb 14 04:32:59 crc kubenswrapper[4867]: I0214 04:32:59.025642 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" path="/var/lib/kubelet/pods/4e650fa8-a893-47e0-a5d5-0df60430ea9e/volumes"
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.102731 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.103169 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.203346 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7797898b6d-54xz8"
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.241000 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.292847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"]
Feb 14 04:33:00 crc kubenswrapper[4867]: I0214 04:33:00.293057 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-677c4ffcdf-n44s6" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" containerID="cri-o://6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" gracePeriod=60
Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.408087 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.411213 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.416663 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Feb 14 04:33:02 crc kubenswrapper[4867]: E0214 04:33:02.416731 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-677c4ffcdf-n44s6" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine"
Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.819334 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.822104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.897296 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:03 crc kubenswrapper[4867]: I0214 04:33:03.901682 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.657410 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.777567 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:04 crc kubenswrapper[4867]: I0214 04:33:04.777987 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.456834 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.458010 4867 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.462793 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.856763 4867 generic.go:334] "Generic (PLEG): container finished" podID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639" exitCode=0
Feb 14 04:33:08 crc kubenswrapper[4867]: I0214 04:33:08.856855 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerDied","Data":"6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639"}
Feb 14 04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.924149 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified"
Feb 14 04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.924357 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lbwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-vwg9c_openstack(cd08e0e3-a41f-4b25-b71a-1c968410d52e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 04:33:09 crc kubenswrapper[4867]: E0214 04:33:09.925616 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e"
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.458257 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6"
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548379 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") "
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") "
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.548636 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") "
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.549431 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") pod \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\" (UID: \"a2ce3fe5-1f15-484b-a608-da9f03d714c9\") "
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.561320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8" (OuterVolumeSpecName: "kube-api-access-lgmf8") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "kube-api-access-lgmf8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.594756 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.655171 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.656097 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgmf8\" (UniqueName: \"kubernetes.io/projected/a2ce3fe5-1f15-484b-a608-da9f03d714c9-kube-api-access-lgmf8\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.665859 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.685272 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data" (OuterVolumeSpecName: "config-data") pod "a2ce3fe5-1f15-484b-a608-da9f03d714c9" (UID: "a2ce3fe5-1f15-484b-a608-da9f03d714c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.758543 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.758588 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2ce3fe5-1f15-484b-a608-da9f03d714c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880288 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-677c4ffcdf-n44s6"
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880600 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-677c4ffcdf-n44s6" event={"ID":"a2ce3fe5-1f15-484b-a608-da9f03d714c9","Type":"ContainerDied","Data":"5411ca415d9a87d0850d6fbf4033b3de2e9b4aed86c0a53707211fd73a6a37cc"}
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.880666 4867 scope.go:117] "RemoveContainer" containerID="6a3313dda26c1a2d9982bba482eb657c4e81d8dc170b8fa9912ec40df49eb639"
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.895712 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" containerID="cri-o://c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" gracePeriod=30
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896037 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" containerID="cri-o://b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" gracePeriod=30
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896094 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" containerID="cri-o://7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" gracePeriod=30
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896130 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" containerID="cri-o://9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" gracePeriod=30
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896174 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerStarted","Data":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"}
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.896203 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 04:33:10 crc kubenswrapper[4867]: E0214 04:33:10.914654 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e"
Feb 14 04:33:10 crc kubenswrapper[4867]: I0214 04:33:10.939857 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.568828552 podStartE2EDuration="17.939832995s" podCreationTimestamp="2026-02-14 04:32:53 +0000 UTC" firstStartedPulling="2026-02-14 04:32:54.418859845 +0000 UTC m=+1406.499797159" lastFinishedPulling="2026-02-14 04:32:58.789864288 +0000 UTC m=+1410.870801602" observedRunningTime="2026-02-14 04:33:10.92401539 +0000 UTC m=+1423.004952704" watchObservedRunningTime="2026-02-14 04:33:10.939832995 +0000 UTC m=+1423.020770309"
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.026306 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"]
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.026346 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-677c4ffcdf-n44s6"]
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917633 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" exitCode=0
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917888 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" exitCode=2
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"}
Feb 14 04:33:11 crc kubenswrapper[4867]: I0214 04:33:11.917943 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"}
Feb 14 04:33:12 crc kubenswrapper[4867]: I0214 04:33:12.932544 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" exitCode=0
Feb 14 04:33:12 crc kubenswrapper[4867]: I0214 04:33:12.932617 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"}
Feb 14 04:33:13 crc kubenswrapper[4867]: I0214 04:33:13.011274 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" path="/var/lib/kubelet/pods/a2ce3fe5-1f15-484b-a608-da9f03d714c9/volumes"
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.959643 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960348 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" exitCode=0
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960399 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"}
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960437 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"53d13a71-03e0-46f0-9ca1-a868d38727f8","Type":"ContainerDied","Data":"d7f626293a253c0f81c7bd94b01af430ab3e2653b40c33393d86f55218de6f1d"}
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.960459 4867 scope.go:117] "RemoveContainer" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"
Feb 14 04:33:14 crc kubenswrapper[4867]: I0214 04:33:14.997772 4867 scope.go:117] "RemoveContainer" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.019555 4867 scope.go:117] "RemoveContainer" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.055239 4867 scope.go:117] "RemoveContainer" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065566 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065670 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065871 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.065908 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066035 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.066265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") pod \"53d13a71-03e0-46f0-9ca1-a868d38727f8\" (UID: \"53d13a71-03e0-46f0-9ca1-a868d38727f8\") "
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.068320 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.068752 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.074224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch" (OuterVolumeSpecName: "kube-api-access-2x4ch") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "kube-api-access-2x4ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.074315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts" (OuterVolumeSpecName: "scripts") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.103115 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112147 4867 scope.go:117] "RemoveContainer" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.112911 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": container with ID starting with b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e not found: ID does not exist" containerID="b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112960 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e"} err="failed to get container status \"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": rpc error: code = NotFound desc = could not find container \"b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e\": container with ID starting with b01aeddd7a6627ea9d173d8005e626741ff32516f251c5c2ab496415a659b79e not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.112991 4867 scope.go:117] "RemoveContainer" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.116673 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": container with ID starting with 7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3 not found: ID does not exist" containerID="7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.116722 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3"} err="failed to get container status \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": rpc error: code = NotFound desc = could not find container \"7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3\": container with ID starting with 7a99e95642b07c831a11e55d61a4998c2a981443de40d98a474d53cd803563e3 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.116755 4867 scope.go:117] "RemoveContainer" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.117284 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": container with ID starting with 9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490 not found: ID does not exist" containerID="9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.117334 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490"} err="failed to get container status \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": rpc error: code = NotFound desc = could not 
find container \"9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490\": container with ID starting with 9acb43ba5b6f8734f595001c299ec6b770fd758556efb397ef7af5d7c1128490 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.117357 4867 scope.go:117] "RemoveContainer" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" Feb 14 04:33:15 crc kubenswrapper[4867]: E0214 04:33:15.118089 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": container with ID starting with c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743 not found: ID does not exist" containerID="c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.118116 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743"} err="failed to get container status \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": rpc error: code = NotFound desc = could not find container \"c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743\": container with ID starting with c1a49b4b77bae4bd28c8b0c4b6a3607ac206e2b7cf6262bf828fee39c66f8743 not found: ID does not exist" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169233 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x4ch\" (UniqueName: \"kubernetes.io/projected/53d13a71-03e0-46f0-9ca1-a868d38727f8-kube-api-access-2x4ch\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169273 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169284 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169293 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.169300 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/53d13a71-03e0-46f0-9ca1-a868d38727f8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.207707 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data" (OuterVolumeSpecName: "config-data") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.222693 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53d13a71-03e0-46f0-9ca1-a868d38727f8" (UID: "53d13a71-03e0-46f0-9ca1-a868d38727f8"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.272247 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.272295 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53d13a71-03e0-46f0-9ca1-a868d38727f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:15 crc kubenswrapper[4867]: I0214 04:33:15.972036 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.011522 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.023256 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.044709 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045207 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045228 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045238 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045245 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045269 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045276 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045290 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045296 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045318 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045326 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045337 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045342 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045361 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045368 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045383 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045389 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045600 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-notification-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045617 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045625 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e650fa8-a893-47e0-a5d5-0df60430ea9e" containerName="heat-cfnapi" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045633 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045645 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045659 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="proxy-httpd" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045680 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="ceilometer-central-agent" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045693 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" containerName="sg-core" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045707 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2ce3fe5-1f15-484b-a608-da9f03d714c9" containerName="heat-engine" Feb 14 04:33:16 crc kubenswrapper[4867]: E0214 04:33:16.045907 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.045917 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9a1d71-05e1-40ab-90a7-530d2083fe14" containerName="heat-api" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.048860 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.054137 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.055460 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.068895 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101114 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101475 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101816 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.101979 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.102138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.102532 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205263 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 
04:33:16.205357 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205483 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.205589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.206311 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.206578 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.210855 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.211134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.222012 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.227620 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.233585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"ceilometer-0\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.376477 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.888992 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:16 crc kubenswrapper[4867]: I0214 04:33:16.986984 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"9521b41bed1d9a154f90b192faa3f2ee97914bb5360639cfc7050d90128f992d"} Feb 14 04:33:17 crc kubenswrapper[4867]: I0214 04:33:17.009276 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d13a71-03e0-46f0-9ca1-a868d38727f8" path="/var/lib/kubelet/pods/53d13a71-03e0-46f0-9ca1-a868d38727f8/volumes" Feb 14 04:33:18 crc kubenswrapper[4867]: I0214 04:33:17.999721 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} Feb 14 04:33:19 crc kubenswrapper[4867]: I0214 04:33:19.028947 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"} Feb 14 04:33:20 crc kubenswrapper[4867]: I0214 04:33:20.051782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.078481 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerStarted","Data":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.079065 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:33:22 crc kubenswrapper[4867]: I0214 04:33:22.108347 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.00288849 podStartE2EDuration="6.108329528s" podCreationTimestamp="2026-02-14 04:33:16 +0000 UTC" firstStartedPulling="2026-02-14 04:33:16.913999873 +0000 UTC m=+1428.994937187" 
lastFinishedPulling="2026-02-14 04:33:21.019440921 +0000 UTC m=+1433.100378225" observedRunningTime="2026-02-14 04:33:22.105388129 +0000 UTC m=+1434.186325473" watchObservedRunningTime="2026-02-14 04:33:22.108329528 +0000 UTC m=+1434.189266842" Feb 14 04:33:27 crc kubenswrapper[4867]: I0214 04:33:27.135746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerStarted","Data":"0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91"} Feb 14 04:33:27 crc kubenswrapper[4867]: I0214 04:33:27.160633 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" podStartSLOduration=2.188576522 podStartE2EDuration="38.160610297s" podCreationTimestamp="2026-02-14 04:32:49 +0000 UTC" firstStartedPulling="2026-02-14 04:32:50.442907176 +0000 UTC m=+1402.523844490" lastFinishedPulling="2026-02-14 04:33:26.414940951 +0000 UTC m=+1438.495878265" observedRunningTime="2026-02-14 04:33:27.151330517 +0000 UTC m=+1439.232267831" watchObservedRunningTime="2026-02-14 04:33:27.160610297 +0000 UTC m=+1439.241547611" Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.450923 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.451889 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent" containerID="cri-o://e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452016 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent" containerID="cri-o://a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452012 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core" containerID="cri-o://08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.452048 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" containerID="cri-o://831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" gracePeriod=30 Feb 14 04:33:34 crc kubenswrapper[4867]: I0214 04:33:34.473078 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.232:3000/\": EOF" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227775 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" exitCode=0 Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227807 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" exitCode=2 Feb 
14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227816 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" exitCode=0 Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227898 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.227911 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.786258 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838568 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") pod \"788d7241-b06e-48a1-972a-dcfc775b6284\" (UID: \"788d7241-b06e-48a1-972a-dcfc775b6284\") " Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.838930 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.839660 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.840868 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.853697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts" (OuterVolumeSpecName: "scripts") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.853734 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2" (OuterVolumeSpecName: "kube-api-access-6vcb2") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "kube-api-access-6vcb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.942921 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.943459 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/788d7241-b06e-48a1-972a-dcfc775b6284-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.943588 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vcb2\" (UniqueName: \"kubernetes.io/projected/788d7241-b06e-48a1-972a-dcfc775b6284-kube-api-access-6vcb2\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.954791 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:35 crc kubenswrapper[4867]: I0214 04:33:35.973491 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data" (OuterVolumeSpecName: "config-data") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.015955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "788d7241-b06e-48a1-972a-dcfc775b6284" (UID: "788d7241-b06e-48a1-972a-dcfc775b6284"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046143 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046188 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.046199 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/788d7241-b06e-48a1-972a-dcfc775b6284-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260432 4867 generic.go:334] "Generic (PLEG): container finished" podID="788d7241-b06e-48a1-972a-dcfc775b6284" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" exitCode=0 Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260532 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"} Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"788d7241-b06e-48a1-972a-dcfc775b6284","Type":"ContainerDied","Data":"9521b41bed1d9a154f90b192faa3f2ee97914bb5360639cfc7050d90128f992d"} Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260586 4867 scope.go:117] "RemoveContainer" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.260755 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.302909 4867 scope.go:117] "RemoveContainer" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.311657 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.328517 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338775 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338794 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338823 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338829 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338842 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338848 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.338868 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.338874 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339254 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-notification-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339275 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="sg-core" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339288 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="proxy-httpd" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.339305 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" containerName="ceilometer-central-agent" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.343764 4867 scope.go:117] "RemoveContainer" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.345931 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.349315 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.349543 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.359976 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.391910 4867 scope.go:117] "RemoveContainer" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.421964 4867 scope.go:117] "RemoveContainer" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.431761 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": container with ID starting with 831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1 not found: ID does not exist" containerID="831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.431843 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1"} err="failed to get container status \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": rpc error: code = NotFound desc = could not find container \"831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1\": container with ID starting with 831f6143040c0fe5fcda131be628b4577c7e07490142b64520cd275be8a63db1 not found: ID does not exist" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.431875 4867 scope.go:117] "RemoveContainer" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.432563 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": container with ID starting with 08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57 not found: ID does not exist" containerID="08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.432608 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57"} err="failed to get container status \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": rpc error: code = NotFound desc = could not find container \"08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57\": container with ID starting with 08d1bd8680ee0a9caae4afea313d114c5670fd0ccfcb36da45fe6092ef6fbb57 not found: ID does not exist" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.432642 4867 scope.go:117] "RemoveContainer" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.434689 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": container with ID starting with a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49 not found: ID does not exist" containerID="a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.434740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49"} err="failed to get container status \"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": rpc error: code = NotFound desc = could not find container \"a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49\": container with ID starting with a58f4833c7dc52ae560c46151d14057c1ee06f5e3f9e04d4d19f561f80e18b49 not found: ID does not exist" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.434769 4867 scope.go:117] "RemoveContainer" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" Feb 14 04:33:36 crc kubenswrapper[4867]: E0214 04:33:36.435118 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": container with ID starting with e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe not found: ID does not exist" containerID="e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.435215 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe"} err="failed to get container status \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": rpc error: code = NotFound desc = could not find container \"e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe\": container with ID starting with e18f2a83cfdf9446ab26d91a1eba8e1f68f59b3dbb23a4376180e9d0192d47fe not found: ID does not exist" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470273 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470379 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470546 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: 
\"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470576 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.470705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572542 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.572972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.573596 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.577984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.579448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.580273 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.581799 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.582019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.595358 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"ceilometer-0\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " pod="openstack/ceilometer-0" Feb 14 04:33:36 crc kubenswrapper[4867]: I0214 04:33:36.662971 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.022405 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="788d7241-b06e-48a1-972a-dcfc775b6284" path="/var/lib/kubelet/pods/788d7241-b06e-48a1-972a-dcfc775b6284/volumes" Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.231864 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.282560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"2a9b10b567b5808562253fe944271d1f75330bc923dcd36a8e5d5a2e2e2a94fb"} Feb 14 04:33:37 crc kubenswrapper[4867]: I0214 04:33:37.804021 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:33:38 crc kubenswrapper[4867]: I0214 04:33:38.293458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"} Feb 14 04:33:39 crc kubenswrapper[4867]: I0214 04:33:39.304442 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"} Feb 14 04:33:40 crc kubenswrapper[4867]: I0214 04:33:40.316102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"} Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.327971 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerStarted","Data":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"} Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328299 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" containerID="cri-o://384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" gracePeriod=30 Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328653 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" containerID="cri-o://24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" gracePeriod=30 Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328769 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" containerID="cri-o://36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" gracePeriod=30 Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.328923 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" 
containerID="cri-o://cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" gracePeriod=30 Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.338289 4867 generic.go:334] "Generic (PLEG): container finished" podID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerID="0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91" exitCode=0 Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.338341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerDied","Data":"0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91"} Feb 14 04:33:41 crc kubenswrapper[4867]: I0214 04:33:41.360924 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6911566900000001 podStartE2EDuration="5.360906371s" podCreationTimestamp="2026-02-14 04:33:36 +0000 UTC" firstStartedPulling="2026-02-14 04:33:37.208749218 +0000 UTC m=+1449.289686532" lastFinishedPulling="2026-02-14 04:33:40.878498899 +0000 UTC m=+1452.959436213" observedRunningTime="2026-02-14 04:33:41.357607733 +0000 UTC m=+1453.438545047" watchObservedRunningTime="2026-02-14 04:33:41.360906371 +0000 UTC m=+1453.441843685" Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.351822 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" exitCode=2 Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.352157 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" exitCode=0 Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.351909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"} Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.352213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"} Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.792023 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.945915 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946009 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946098 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.946243 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") pod \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\" (UID: \"cd08e0e3-a41f-4b25-b71a-1c968410d52e\") " Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.953745 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts" (OuterVolumeSpecName: "scripts") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.959436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx" (OuterVolumeSpecName: "kube-api-access-lbwhx") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "kube-api-access-lbwhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.977936 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:42 crc kubenswrapper[4867]: I0214 04:33:42.995582 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data" (OuterVolumeSpecName: "config-data") pod "cd08e0e3-a41f-4b25-b71a-1c968410d52e" (UID: "cd08e0e3-a41f-4b25-b71a-1c968410d52e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.049938 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.049988 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.050002 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbwhx\" (UniqueName: \"kubernetes.io/projected/cd08e0e3-a41f-4b25-b71a-1c968410d52e-kube-api-access-lbwhx\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.050018 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd08e0e3-a41f-4b25-b71a-1c968410d52e-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" event={"ID":"cd08e0e3-a41f-4b25-b71a-1c968410d52e","Type":"ContainerDied","Data":"bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142"} Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364887 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd096683847f90cf05e85285ccd82cb246a3d9366805a56c5de6b41e0584b142" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.364599 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-vwg9c" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.544351 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 04:33:43 crc kubenswrapper[4867]: E0214 04:33:43.544929 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.544948 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.545167 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" containerName="nova-cell0-conductor-db-sync" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.546067 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.548597 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fspzg" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.550441 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.558732 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662320 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662422 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.662811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.765695 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.766774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.767204 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.771771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.773778 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdfa169f-f57f-4d9c-bef3-529878be941b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.801240 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x27g\" (UniqueName: \"kubernetes.io/projected/fdfa169f-f57f-4d9c-bef3-529878be941b-kube-api-access-9x27g\") pod \"nova-cell0-conductor-0\" (UID: \"fdfa169f-f57f-4d9c-bef3-529878be941b\") " pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.862130 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.955941 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.957841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.968548 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.970012 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:43 crc kubenswrapper[4867]: I0214 04:33:43.972097 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.007761 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.026312 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074774 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.074910 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 
04:33:44.176866 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.176915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.176957 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.177051 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.178577 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.178708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.197745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"aodh-42f0-account-create-update-vx5cp\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.216212 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"aodh-db-create-4dwll\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.335692 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.346122 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.500003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 14 04:33:44 crc kubenswrapper[4867]: W0214 04:33:44.943925 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod486bfb80_5589_4e9e_84d3_10726a066702.slice/crio-4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7 WatchSource:0}: Error finding container 4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7: Status 404 returned error can't find the container with id 4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7 Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.946164 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:33:44 crc kubenswrapper[4867]: I0214 04:33:44.967105 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.394459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fdfa169f-f57f-4d9c-bef3-529878be941b","Type":"ContainerStarted","Data":"5583803dac28810d6916569bf5511e8697e9203a7832b557492770fa91b1d747"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.394797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"fdfa169f-f57f-4d9c-bef3-529878be941b","Type":"ContainerStarted","Data":"2cb0fed603959c18b96e191a8248a3082516ac2b75f1907d1852f250904be6e6"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.400889 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerStarted","Data":"f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.400944 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerStarted","Data":"4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.403097 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerStarted","Data":"25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.403142 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerStarted","Data":"a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62"} Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.448379 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.448353466 podStartE2EDuration="2.448353466s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.421653809 +0000 UTC m=+1457.502591133" watchObservedRunningTime="2026-02-14 04:33:45.448353466 +0000 UTC m=+1457.529290780" Feb 
14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.466212 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-4dwll" podStartSLOduration=2.466187765 podStartE2EDuration="2.466187765s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.445390346 +0000 UTC m=+1457.526327660" watchObservedRunningTime="2026-02-14 04:33:45.466187765 +0000 UTC m=+1457.547125089" Feb 14 04:33:45 crc kubenswrapper[4867]: I0214 04:33:45.513922 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-42f0-account-create-update-vx5cp" podStartSLOduration=2.513899077 podStartE2EDuration="2.513899077s" podCreationTimestamp="2026-02-14 04:33:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:45.459883676 +0000 UTC m=+1457.540820990" watchObservedRunningTime="2026-02-14 04:33:45.513899077 +0000 UTC m=+1457.594836391" Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.416896 4867 generic.go:334] "Generic (PLEG): container finished" podID="486bfb80-5589-4e9e-84d3-10726a066702" containerID="f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51" exitCode=0 Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.417177 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerDied","Data":"f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51"} Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.419861 4867 generic.go:334] "Generic (PLEG): container finished" podID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerID="25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64" exitCode=0 Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.419929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerDied","Data":"25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64"} Feb 14 04:33:46 crc kubenswrapper[4867]: I0214 04:33:46.420074 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:47 crc kubenswrapper[4867]: I0214 04:33:47.432662 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" exitCode=0 Feb 14 04:33:47 crc kubenswrapper[4867]: I0214 04:33:47.432724 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.005585 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.027596 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185191 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") pod \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") pod \"486bfb80-5589-4e9e-84d3-10726a066702\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185545 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") pod \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\" (UID: \"4aa569b6-1ec2-48e8-99c2-f165e5ea9604\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.185705 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") pod \"486bfb80-5589-4e9e-84d3-10726a066702\" (UID: \"486bfb80-5589-4e9e-84d3-10726a066702\") " Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.187966 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "486bfb80-5589-4e9e-84d3-10726a066702" (UID: "486bfb80-5589-4e9e-84d3-10726a066702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.193447 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4aa569b6-1ec2-48e8-99c2-f165e5ea9604" (UID: "4aa569b6-1ec2-48e8-99c2-f165e5ea9604"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.196995 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff" (OuterVolumeSpecName: "kube-api-access-xnnff") pod "4aa569b6-1ec2-48e8-99c2-f165e5ea9604" (UID: "4aa569b6-1ec2-48e8-99c2-f165e5ea9604"). InnerVolumeSpecName "kube-api-access-xnnff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.197102 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz" (OuterVolumeSpecName: "kube-api-access-zq9pz") pod "486bfb80-5589-4e9e-84d3-10726a066702" (UID: "486bfb80-5589-4e9e-84d3-10726a066702"). InnerVolumeSpecName "kube-api-access-zq9pz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289873 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/486bfb80-5589-4e9e-84d3-10726a066702-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289924 4867 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289942 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9pz\" (UniqueName: \"kubernetes.io/projected/486bfb80-5589-4e9e-84d3-10726a066702-kube-api-access-zq9pz\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.289957 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnff\" (UniqueName: \"kubernetes.io/projected/4aa569b6-1ec2-48e8-99c2-f165e5ea9604-kube-api-access-xnnff\") on node \"crc\" DevicePath \"\"" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461334 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-4dwll" event={"ID":"486bfb80-5589-4e9e-84d3-10726a066702","Type":"ContainerDied","Data":"4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461402 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f316ece368c5c11c43eaedc9965b6523c35a9abf1623c22f23d982d15d9a1e7" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.461426 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-4dwll" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-42f0-account-create-update-vx5cp" event={"ID":"4aa569b6-1ec2-48e8-99c2-f165e5ea9604","Type":"ContainerDied","Data":"a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62"} Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464350 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ad71ee663b264a38d85b7ace139092be1831588b6cb7e85dec1f224d42ae62" Feb 14 04:33:48 crc kubenswrapper[4867]: I0214 04:33:48.464449 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-42f0-account-create-update-vx5cp" Feb 14 04:33:53 crc kubenswrapper[4867]: I0214 04:33:53.892198 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.430849 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:33:54 crc kubenswrapper[4867]: E0214 04:33:54.431447 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431472 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: E0214 04:33:54.431496 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="486bfb80-5589-4e9e-84d3-10726a066702" containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431520 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="486bfb80-5589-4e9e-84d3-10726a066702" containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431801 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" containerName="mariadb-account-create-update" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.431829 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="486bfb80-5589-4e9e-84d3-10726a066702" containerName="mariadb-database-create" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.432905 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435040 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435186 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.435878 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.436378 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.451147 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.452798 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.454229 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.456591 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.469523 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.482982 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538271 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538289 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538346 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538425 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.538476 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc 
kubenswrapper[4867]: I0214 04:33:54.538609 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640607 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640745 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640785 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640809 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.640900 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.656045 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.661272 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.669568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.669767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.674842 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.678274 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"aodh-db-sync-dnl28\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " pod="openstack/aodh-db-sync-dnl28" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.681025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.685230 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.687201 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.689759 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.700602 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8pszd\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " pod="openstack/nova-cell0-cell-mapping-8pszd"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.758042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.762867 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dnl28"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.771463 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.850964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.851049 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.851144 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.875622 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.877844 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.886094 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.900168 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.932600 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.934841 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.940951 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.953939 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954017 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954154 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954268 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.954461 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.955216 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.972231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:54 crc kubenswrapper[4867]: I0214 04:33:54.984812 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.011259 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"nova-scheduler-0\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " pod="openstack/nova-scheduler-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.107485 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.107998 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108381 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.108424 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.109608 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.126384 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.127013 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.126965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.128195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.154434 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"nova-api-0\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.159773 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.159827 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.164524 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.203031 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.214385 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.215171 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220014 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220185 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220388 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.220774 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.230528 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.236250 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.260964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.276075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"nova-metadata-0\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") " pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.297127 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.308366 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.324891 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.327847 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.328038 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.331237 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.345912 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.348356 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.383089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"nova-cell1-novncproxy-0\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436765 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436858 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436897 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.436989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.437113 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.441762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544636 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544720 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544748 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.544806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.545851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.546931 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547603 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547641 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.547737 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.576572 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"dnsmasq-dns-9b86998b5-sf4cl\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") " pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.598339 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.651716 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.850708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"]
Feb 14 04:33:55 crc kubenswrapper[4867]: I0214 04:33:55.863395 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dnl28"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.212579 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.218457 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: W0214 04:33:56.258667 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a61bb72_374e_48c9_bfa2_bbcc3e7503e6.slice/crio-ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500 WatchSource:0}: Error finding container ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500: Status 404 returned error can't find the container with id ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.452981 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.496793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.602628 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerStarted","Data":"7421ae1cc8f7150f6013e7337e1040d9ce9252e306ea9b4407c26605f30d6363"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.612127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerStarted","Data":"2f1ec16c434c7fe8c8b2e012785b630337a932a6d095d2d76aaa4e23a79c54fa"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.623265 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.626145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"ddf55a66062b9b23ede7bc9c23d0eaea8956685a5ae97e614a4a208a0cb63dd4"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.640054 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerStarted","Data":"1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.644733 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerStarted","Data":"4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.644781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerStarted","Data":"b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84"}
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.694427 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8pszd" podStartSLOduration=2.694403104 podStartE2EDuration="2.694403104s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:56.660197245 +0000 UTC m=+1468.741134559" watchObservedRunningTime="2026-02-14 04:33:56.694403104 +0000 UTC m=+1468.775340418"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.757348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.879774 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.881414 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.886881 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.887050 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 14 04:33:56 crc kubenswrapper[4867]: I0214 04:33:56.907637 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025326 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025908 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.025992 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.026080 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131230 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131350 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.131535 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.138296 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.153601 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.154108 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.154159 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-jw78d\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.300149 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d"
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.690865 4867 generic.go:334] "Generic (PLEG): container finished" podID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374" exitCode=0
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.694424 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"}
Feb 14 04:33:57 crc kubenswrapper[4867]: I0214 04:33:57.694478 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerStarted","Data":"d75507374634724c8a1ef310952a5ce339f06c748d3d87d74bf982c68a7ee156"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.084583 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.521135 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.563175 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.734664 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerStarted","Data":"9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.734716 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerStarted","Data":"2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.743616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerStarted","Data":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"}
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.745901 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.767980 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-jw78d" podStartSLOduration=2.767947928 podStartE2EDuration="2.767947928s" podCreationTimestamp="2026-02-14 04:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:58.756959513 +0000 UTC m=+1470.837896847" watchObservedRunningTime="2026-02-14 04:33:58.767947928 +0000 UTC m=+1470.848885242"
Feb 14 04:33:58 crc kubenswrapper[4867]: I0214 04:33:58.794809 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" podStartSLOduration=3.794776249 podStartE2EDuration="3.794776249s" podCreationTimestamp="2026-02-14 04:33:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:33:58.78887438 +0000 UTC m=+1470.869811694" watchObservedRunningTime="2026-02-14 04:33:58.794776249 +0000 UTC m=+1470.875713573"
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.653668 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.717412 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.718724 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns" containerID="cri-o://5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74" gracePeriod=10
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.894574 4867 generic.go:334] "Generic (PLEG): container finished" podID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerID="5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74" exitCode=0
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.894740 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"}
Feb 14 04:34:05 crc kubenswrapper[4867]: I0214 04:34:05.901886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.530286 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.657983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.658997 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659084 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659231 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659325 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.659395 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.679902 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h" (OuterVolumeSpecName: "kube-api-access-5bm4h") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "kube-api-access-5bm4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.709438 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.763837 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bm4h\" (UniqueName: \"kubernetes.io/projected/7959a0fa-00bd-492c-9892-a8c8727549c6-kube-api-access-5bm4h\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.773303 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config" (OuterVolumeSpecName: "config") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.780327 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.784153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.864683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.865546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") pod \"7959a0fa-00bd-492c-9892-a8c8727549c6\" (UID: \"7959a0fa-00bd-492c-9892-a8c8727549c6\") "
Feb 14 04:34:06 crc kubenswrapper[4867]: W0214 04:34:06.865732 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7959a0fa-00bd-492c-9892-a8c8727549c6/volumes/kubernetes.io~configmap/ovsdbserver-sb
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.865753 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866576 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866602 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866612 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.866621 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.899453 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7959a0fa-00bd-492c-9892-a8c8727549c6" (UID: "7959a0fa-00bd-492c-9892-a8c8727549c6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.928423 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerStarted","Data":"bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.935067 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.935131 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerStarted","Data":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.949706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerStarted","Data":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.950022 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log" containerID="cri-o://e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" gracePeriod=30
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.950163 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata" containerID="cri-o://f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" gracePeriod=30
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.966854 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerStarted","Data":"027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.969893 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7959a0fa-00bd-492c-9892-a8c8727549c6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.975652 4867 generic.go:334] "Generic (PLEG): container finished" podID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerID="4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76" exitCode=0
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.975722 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerDied","Data":"4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.987476 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.819298479 podStartE2EDuration="12.987452976s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.183016854 +0000 UTC m=+1468.263954168" lastFinishedPulling="2026-02-14 04:34:05.351171341 +0000 UTC m=+1477.432108665" observedRunningTime="2026-02-14 04:34:06.95409206 +0000 UTC m=+1479.035029374" watchObservedRunningTime="2026-02-14 04:34:06.987452976 +0000 UTC m=+1479.068390290"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl" event={"ID":"7959a0fa-00bd-492c-9892-a8c8727549c6","Type":"ContainerDied","Data":"509c3996717307d8c2159fc143b05ca2d8e25b377427985ddf997628e72d1f60"}
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991520 4867 scope.go:117] "RemoveContainer" containerID="5a01ea22a86b95bd3d047ecc780ee7786ac3f26352c9a5ce1e038cc9e891bc74"
Feb 14 04:34:06 crc kubenswrapper[4867]: I0214 04:34:06.991614 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-ccbrl"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.000758 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" gracePeriod=30
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.014669 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.170466395 podStartE2EDuration="13.014645167s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.509611569 +0000 UTC m=+1468.590548883" lastFinishedPulling="2026-02-14 04:34:05.353790341 +0000 UTC m=+1477.434727655" observedRunningTime="2026-02-14 04:34:06.991359651 +0000 UTC m=+1479.072296965" watchObservedRunningTime="2026-02-14 04:34:07.014645167 +0000 UTC m=+1479.095582481"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.041835 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.97678016 podStartE2EDuration="13.041804586s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.2707049 +0000 UTC m=+1468.351642214" lastFinishedPulling="2026-02-14 04:34:05.335729326 +0000 UTC m=+1477.416666640" observedRunningTime="2026-02-14 04:34:07.01437977 +0000 UTC m=+1479.095317084" watchObservedRunningTime="2026-02-14 04:34:07.041804586 +0000 UTC m=+1479.122741900"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.041878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerStarted","Data":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"}
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.044927 4867 scope.go:117] "RemoveContainer" containerID="82838cd053ec19d9355b8bed3bca33d40ca78328ccc5425dbe3475e660e9969c"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.082324 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dnl28" podStartSLOduration=3.639823476 podStartE2EDuration="13.082297494s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:55.924732234 +0000 UTC m=+1468.005669538" lastFinishedPulling="2026-02-14 04:34:05.367206242 +0000 UTC m=+1477.448143556" observedRunningTime="2026-02-14 04:34:07.034377527 +0000 UTC m=+1479.115314841" watchObservedRunningTime="2026-02-14 04:34:07.082297494 +0000 UTC m=+1479.163234808"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.188902 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.299235885 podStartE2EDuration="13.188536809s" podCreationTimestamp="2026-02-14 04:33:54 +0000 UTC" firstStartedPulling="2026-02-14 04:33:56.467614951 +0000 UTC m=+1468.548552265" lastFinishedPulling="2026-02-14 04:34:05.356915875 +0000 UTC m=+1477.437853189" observedRunningTime="2026-02-14 04:34:07.117703796 +0000 UTC m=+1479.198641110" watchObservedRunningTime="2026-02-14 04:34:07.188536809 +0000 UTC m=+1479.269474123"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.206293 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.223707 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-ccbrl"]
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.739964 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.817983 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818188 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818373 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.818567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") pod \"f7eae771-49da-40b9-a538-9c7c49f61ce3\" (UID: \"f7eae771-49da-40b9-a538-9c7c49f61ce3\") "
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.820040 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs" (OuterVolumeSpecName: "logs") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.835749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb" (OuterVolumeSpecName: "kube-api-access-lxjqb") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "kube-api-access-lxjqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.912809 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.926772 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data" (OuterVolumeSpecName: "config-data") pod "f7eae771-49da-40b9-a538-9c7c49f61ce3" (UID: "f7eae771-49da-40b9-a538-9c7c49f61ce3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928558 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxjqb\" (UniqueName: \"kubernetes.io/projected/f7eae771-49da-40b9-a538-9c7c49f61ce3-kube-api-access-lxjqb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928580 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-config-data\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928592 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7eae771-49da-40b9-a538-9c7c49f61ce3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:07 crc kubenswrapper[4867]: I0214 04:34:07.928601 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7eae771-49da-40b9-a538-9c7c49f61ce3-logs\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014374 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8" exitCode=0
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014407 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" exitCode=143
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.014644 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016775 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f7eae771-49da-40b9-a538-9c7c49f61ce3","Type":"ContainerDied","Data":"ddf55a66062b9b23ede7bc9c23d0eaea8956685a5ae97e614a4a208a0cb63dd4"}
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.016880 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.078623 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.082534 4867 scope.go:117] "RemoveContainer" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.104265 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.122963 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123893 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.123921 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata"
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123953 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.123962 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log"
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.123994 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124006 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns"
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.124031 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="init"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124039 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="init"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124324 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-metadata"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124351 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" containerName="nova-metadata-log"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.124381 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" containerName="dnsmasq-dns"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.126196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.129576 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.134359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.141215 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.205863 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.209072 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.209185 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"} err="failed to get container status \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": rpc error: code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.209216 4867 scope.go:117] "RemoveContainer" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"
Feb 14 04:34:08 crc kubenswrapper[4867]: E0214 04:34:08.212024 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212053 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"} err="failed to get container status \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212121 4867 scope.go:117] "RemoveContainer" containerID="f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"
Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212353 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8"} err="failed to get container status \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": rpc error:
code = NotFound desc = could not find container \"f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8\": container with ID starting with f026d965689327ff7eaf47896abc06424c95fc23b903539ea722d4d22e226ac8 not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.212389 4867 scope.go:117] "RemoveContainer" containerID="e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.213225 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded"} err="failed to get container status \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": rpc error: code = NotFound desc = could not find container \"e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded\": container with ID starting with e09897e24480ca2b5eb387baeb8c83bcb5bcba1b2b26539d881c98ae54782ded not found: ID does not exist" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.236605 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.236978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.237542 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.238006 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.238437 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.342158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.342967 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.343222 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.344352 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.344859 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.345407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.347176 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.348558 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.350464 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.364106 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"nova-metadata-0\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.450961 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.622893 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753615 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.753679 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.754116 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") pod \"9947f337-0734-4b4e-bc31-e68e6354ed74\" (UID: \"9947f337-0734-4b4e-bc31-e68e6354ed74\") " Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.763919 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts" (OuterVolumeSpecName: "scripts") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.764084 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7" (OuterVolumeSpecName: "kube-api-access-jzzc7") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "kube-api-access-jzzc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.799674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.829770 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data" (OuterVolumeSpecName: "config-data") pod "9947f337-0734-4b4e-bc31-e68e6354ed74" (UID: "9947f337-0734-4b4e-bc31-e68e6354ed74"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858353 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858439 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzzc7\" (UniqueName: \"kubernetes.io/projected/9947f337-0734-4b4e-bc31-e68e6354ed74-kube-api-access-jzzc7\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858464 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.858479 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9947f337-0734-4b4e-bc31-e68e6354ed74-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:08 crc kubenswrapper[4867]: I0214 04:34:08.983631 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:08 crc kubenswrapper[4867]: W0214 04:34:08.990523 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac83a182_1841_4e64_9b31_f20e32917613.slice/crio-17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b WatchSource:0}: Error finding container 17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b: Status 404 returned error can't find the container with id 17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.010099 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7959a0fa-00bd-492c-9892-a8c8727549c6" path="/var/lib/kubelet/pods/7959a0fa-00bd-492c-9892-a8c8727549c6/volumes" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.010786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7eae771-49da-40b9-a538-9c7c49f61ce3" path="/var/lib/kubelet/pods/f7eae771-49da-40b9-a538-9c7c49f61ce3/volumes" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.026590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031036 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8pszd" event={"ID":"9947f337-0734-4b4e-bc31-e68e6354ed74","Type":"ContainerDied","Data":"b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031082 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b83da7feac047b1c75ae9cbc66ea6dcb6125f8dff0301b8d2f1043fda57d7b84" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.031158 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8pszd" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.033176 4867 generic.go:334] "Generic (PLEG): container finished" podID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerID="9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989" exitCode=0 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.033220 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerDied","Data":"9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989"} Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.281447 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.281794 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log" containerID="cri-o://da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" gracePeriod=30 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.282461 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api" containerID="cri-o://9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" gracePeriod=30 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.314212 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.314873 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler" containerID="cri-o://bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" gracePeriod=30 Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.330078 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.955968 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.998927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999273 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:09 crc kubenswrapper[4867]: I0214 04:34:09.999312 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") pod \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\" (UID: \"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.001073 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs" (OuterVolumeSpecName: "logs") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.009257 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl" (OuterVolumeSpecName: "kube-api-access-d7vsl") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "kube-api-access-d7vsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.042238 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046208 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerStarted","Data":"e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046556 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" containerID="cri-o://e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" gracePeriod=30 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.046678 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" containerID="cri-o://c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" gracePeriod=30 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049182 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049325 4867 generic.go:334] "Generic (PLEG): container finished" podID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" exitCode=0 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049352 4867 generic.go:334] "Generic (PLEG): container finished" podID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" exitCode=143 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8a61bb72-374e-48c9-bfa2-bbcc3e7503e6","Type":"ContainerDied","Data":"ad32db769940286e14cc05b0d71b14b2584188a0981a1af84763e8fb6a761500"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.049474 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.053171 4867 generic.go:334] "Generic (PLEG): container finished" podID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerID="027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200" exitCode=0 Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.053352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" 
event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerDied","Data":"027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200"} Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.054678 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data" (OuterVolumeSpecName: "config-data") pod "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" (UID: "8a61bb72-374e-48c9-bfa2-bbcc3e7503e6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.094626 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.101995 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102023 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102033 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.102046 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7vsl\" (UniqueName: \"kubernetes.io/projected/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6-kube-api-access-d7vsl\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.108856 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.108836324 podStartE2EDuration="2.108836324s" podCreationTimestamp="2026-02-14 04:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:10.062901969 +0000 UTC m=+1482.143839303" watchObservedRunningTime="2026-02-14 04:34:10.108836324 +0000 UTC m=+1482.189773638" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.128765 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.133523 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.133583 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"} err="failed to get container status \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 
9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.133613 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.138331 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.138368 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"} err="failed to get container status \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.138387 4867 scope.go:117] "RemoveContainer" containerID="9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.139816 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567"} err="failed to get container status \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": rpc error: code = NotFound desc = could not find container \"9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567\": container with ID starting with 9a5116fd54e01f05e9d364d64710a67006a81d421d560d17bd6b58d16f3ec567 not found: ID does not exist" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.139869 4867 scope.go:117] "RemoveContainer" containerID="da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.140233 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004"} err="failed to get container status \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": rpc error: code = NotFound desc = could not find container \"da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004\": container with ID starting with da2202426713d859b897485d737b21484e5ca9b5d7888e558f0886564ebc4004 not found: ID does not exist" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.162065 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.510552 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.542089 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.560153 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.597833 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598470 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598493 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log" Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598557 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598568 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage" Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598581 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598588 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync" Feb 14 04:34:10 crc kubenswrapper[4867]: E0214 04:34:10.598600 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598608 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598814 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-api" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598833 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" containerName="nova-api-log" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598847 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" containerName="nova-manage" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.598866 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" containerName="nova-cell1-conductor-db-sync" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.605695 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.605843 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.609406 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610264 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610684 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.610846 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") pod \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\" (UID: \"2bbf3a42-f012-4bed-a60e-1defcd0b1af9\") " Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.616683 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg" (OuterVolumeSpecName: "kube-api-access-x5whg") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "kube-api-access-x5whg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.660762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts" (OuterVolumeSpecName: "scripts") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.664689 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.667701 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.674457 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data" (OuterVolumeSpecName: "config-data") pod "2bbf3a42-f012-4bed-a60e-1defcd0b1af9" (UID: "2bbf3a42-f012-4bed-a60e-1defcd0b1af9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.712821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.712964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713014 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713157 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713256 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713276 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713289 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5whg\" (UniqueName: \"kubernetes.io/projected/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-kube-api-access-x5whg\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.713302 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbf3a42-f012-4bed-a60e-1defcd0b1af9-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815757 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"nova-api-0\" (UID: 
\"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.815942 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.816030 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.823302 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.838373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:10 crc kubenswrapper[4867]: I0214 04:34:10.840265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"nova-api-0\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " pod="openstack/nova-api-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.077580 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.149609 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a61bb72-374e-48c9-bfa2-bbcc3e7503e6" path="/var/lib/kubelet/pods/8a61bb72-374e-48c9-bfa2-bbcc3e7503e6/volumes" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192362 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-jw78d" event={"ID":"2bbf3a42-f012-4bed-a60e-1defcd0b1af9","Type":"ContainerDied","Data":"2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed"} Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192433 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f582cbf6bdcb91733773e29bff48a780e188f584567e68dfb743d1673b021ed" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.192564 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-jw78d" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198184 4867 generic.go:334] "Generic (PLEG): container finished" podID="ac83a182-1841-4e64-9b31-f20e32917613" containerID="c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" exitCode=0 Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198221 4867 generic.go:334] "Generic (PLEG): container finished" podID="ac83a182-1841-4e64-9b31-f20e32917613" containerID="e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" exitCode=143 Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1"} Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.198282 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40"} Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.284584 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.286738 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.306911 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.373917 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.446581 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.446950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.447038 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569178 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.569719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.578734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.595178 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzlh\" (UniqueName: \"kubernetes.io/projected/e367f188-2aa4-4374-a768-92b8e463e40d-kube-api-access-jtzlh\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.604116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e367f188-2aa4-4374-a768-92b8e463e40d-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"e367f188-2aa4-4374-a768-92b8e463e40d\") " pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.738739 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.744804 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.868625 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dnl28" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900543 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900724 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900853 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.900928 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") pod \"ac83a182-1841-4e64-9b31-f20e32917613\" (UID: \"ac83a182-1841-4e64-9b31-f20e32917613\") " Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.901977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs" (OuterVolumeSpecName: "logs") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.918781 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth" (OuterVolumeSpecName: "kube-api-access-5qcth") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "kube-api-access-5qcth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.958009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:11 crc kubenswrapper[4867]: I0214 04:34:11.978061 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data" (OuterVolumeSpecName: "config-data") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003392 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003439 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") pod \"df373c99-9a99-4793-90ef-3ad7887e5e3e\" (UID: \"df373c99-9a99-4793-90ef-3ad7887e5e3e\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003987 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.003999 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.004008 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac83a182-1841-4e64-9b31-f20e32917613-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.004017 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qcth\" (UniqueName: \"kubernetes.io/projected/ac83a182-1841-4e64-9b31-f20e32917613-kube-api-access-5qcth\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.010685 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt" (OuterVolumeSpecName: "kube-api-access-q2gkt") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "kube-api-access-q2gkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.012024 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ac83a182-1841-4e64-9b31-f20e32917613" (UID: "ac83a182-1841-4e64-9b31-f20e32917613"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.012787 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts" (OuterVolumeSpecName: "scripts") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.043553 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data" (OuterVolumeSpecName: "config-data") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.054441 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df373c99-9a99-4793-90ef-3ad7887e5e3e" (UID: "df373c99-9a99-4793-90ef-3ad7887e5e3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.060789 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106885 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106920 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106936 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df373c99-9a99-4793-90ef-3ad7887e5e3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106948 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac83a182-1841-4e64-9b31-f20e32917613-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.106960 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2gkt\" (UniqueName: \"kubernetes.io/projected/df373c99-9a99-4793-90ef-3ad7887e5e3e-kube-api-access-q2gkt\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.163903 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208558 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208716 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208795 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208879 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.208967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") pod \"146fecda-f9b9-4c60-96a7-feb4120cda4c\" (UID: \"146fecda-f9b9-4c60-96a7-feb4120cda4c\") " Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.213188 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.215884 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.215918 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66" (OuterVolumeSpecName: "kube-api-access-xmp66") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "kube-api-access-xmp66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.216201 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts" (OuterVolumeSpecName: "scripts") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225687 4867 generic.go:334] "Generic (PLEG): container finished" podID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" exitCode=137 Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"146fecda-f9b9-4c60-96a7-feb4120cda4c","Type":"ContainerDied","Data":"2a9b10b567b5808562253fe944271d1f75330bc923dcd36a8e5d5a2e2e2a94fb"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.225839 4867 scope.go:117] "RemoveContainer" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.226057 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.235303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ac83a182-1841-4e64-9b31-f20e32917613","Type":"ContainerDied","Data":"17bd6501f265854a6cc4968c75a7bac955f83f1c413ca7aa976b818c26157d4b"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.235374 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.239706 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"8eef4a09d30f75b09b2a4e941b5145891d9e9ba139549a7546f8625ba9359aed"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dnl28" event={"ID":"df373c99-9a99-4793-90ef-3ad7887e5e3e","Type":"ContainerDied","Data":"1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b"} Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249480 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd83dc61097e21fab2d831bb4e520d45961c33509d79aff1a7bb6b26c09cb8b" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.249583 4867 util.go:48] "No ready sandbox for pod can be found. 
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.265255 4867 scope.go:117] "RemoveContainer" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.285296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.296001 4867 scope.go:117] "RemoveContainer" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.308725 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323212 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323245 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/146fecda-f9b9-4c60-96a7-feb4120cda4c-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323254 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-scripts\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323263 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.323272 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmp66\" (UniqueName: \"kubernetes.io/projected/146fecda-f9b9-4c60-96a7-feb4120cda4c-kube-api-access-xmp66\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.333496 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.335072 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.345792 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346243 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346262 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346290 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346314 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346320 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346331 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346337 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346346 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346353 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346372 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346378 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.346395 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346401 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346614 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-log" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346624 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" 
containerName="sg-core" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346640 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-central-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346646 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="proxy-httpd" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346661 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac83a182-1841-4e64-9b31-f20e32917613" containerName="nova-metadata-metadata" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346673 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" containerName="aodh-db-sync" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.346686 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" containerName="ceilometer-notification-agent" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.347897 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352458 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352544 4867 scope.go:117] "RemoveContainer" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.352616 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.390087 4867 scope.go:117] "RemoveContainer" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.392813 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": container with ID starting with 24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167 not found: ID does not exist" containerID="24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.392865 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167"} err="failed to get container status \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": rpc error: code = NotFound desc = could not find container \"24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167\": container with ID starting with 24e83f89f28d0ec5c2caa8639449270fad89c9f3a9ccd66267870d308ea41167 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.392899 4867 scope.go:117] "RemoveContainer" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.394130 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": container with ID starting with 36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7 not found: ID does not 
exist" containerID="36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.394161 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7"} err="failed to get container status \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": rpc error: code = NotFound desc = could not find container \"36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7\": container with ID starting with 36e2d712d4d8b9ed772106e7c47ea1eef658b8b8e9f298edfa74c23417b23cf7 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.394183 4867 scope.go:117] "RemoveContainer" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.395689 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": container with ID starting with cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b not found: ID does not exist" containerID="cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.395730 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b"} err="failed to get container status \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": rpc error: code = NotFound desc = could not find container \"cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b\": container with ID starting with cf3e71140044d5d04aa52ab9b12ad81933fbe11148f05a7ee12b3ff9ed5ecd0b not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.395755 4867 scope.go:117] "RemoveContainer" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.409275 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: E0214 04:34:12.416429 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": container with ID starting with 384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6 not found: ID does not exist" containerID="384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.416477 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6"} err="failed to get container status \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": rpc error: code = NotFound desc = could not find container \"384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6\": container with ID starting with 384d5807f9dd88aadba8af524a64d3e00b94913efc05bfc5c451124ddaedb1d6 not found: ID does not exist" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.416532 4867 scope.go:117] "RemoveContainer" containerID="c5f4d2ce383f399374bc58d1584dbdd0becb6b82315f169b3563b08eb3f414d1" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.428932 4867 
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429558 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429820 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.429999 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.430043 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.438442 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.453312 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data" (OuterVolumeSpecName: "config-data") pod "146fecda-f9b9-4c60-96a7-feb4120cda4c" (UID: "146fecda-f9b9-4c60-96a7-feb4120cda4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.469996 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541335 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541492 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541605 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541696 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541728 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.541827 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146fecda-f9b9-4c60-96a7-feb4120cda4c-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.543451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.547495 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.547758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.548379 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.560096 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"nova-metadata-0\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.654320 4867 scope.go:117] "RemoveContainer" containerID="e338dd6321b7cc373e6d70dc187a67843992c598fb81afefb40eee13511f4c40" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.667750 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.710590 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.738302 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.770044 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.773074 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.778312 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.788898 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.792714 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.867843 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868231 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0" Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868301 4867 
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868331 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.868391 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972368 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972474 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972519 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972643 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.972661 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.976268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.976420 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.981918 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982204 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.982649 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:12 crc kubenswrapper[4867]: I0214 04:34:12.994193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"ceilometer-0\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " pod="openstack/ceilometer-0"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.012274 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="146fecda-f9b9-4c60-96a7-feb4120cda4c" path="/var/lib/kubelet/pods/146fecda-f9b9-4c60-96a7-feb4120cda4c/volumes"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.014008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac83a182-1841-4e64-9b31-f20e32917613" path="/var/lib/kubelet/pods/ac83a182-1841-4e64-9b31-f20e32917613/volumes"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.187601 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:34:13 crc kubenswrapper[4867]: W0214 04:34:13.259161 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35a6b709_4f80_4abc_a92f_24a43d09a805.slice/crio-e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74 WatchSource:0}: Error finding container e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74: Status 404 returned error can't find the container with id e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.261071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.314631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e367f188-2aa4-4374-a768-92b8e463e40d","Type":"ContainerStarted","Data":"23e053e61533d60d688ce5e0075d32a24d5d784bcd101b0ac198cf0073c4215e"}
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.314698 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"e367f188-2aa4-4374-a768-92b8e463e40d","Type":"ContainerStarted","Data":"e09a2563af19b7a141e2095e4a362e700d96702314303525282068762494921d"}
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.317642 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.329381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"}
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.329438 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerStarted","Data":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"}
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.368202 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.368179358 podStartE2EDuration="2.368179358s" podCreationTimestamp="2026-02-14 04:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:13.337250357 +0000 UTC m=+1485.418187681" watchObservedRunningTime="2026-02-14 04:34:13.368179358 +0000 UTC m=+1485.449116672"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.423499 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.423462444 podStartE2EDuration="3.423462444s" podCreationTimestamp="2026-02-14 04:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:13.389728267 +0000 UTC m=+1485.470665581" watchObservedRunningTime="2026-02-14 04:34:13.423462444 +0000 UTC m=+1485.504399758"
Feb 14 04:34:13 crc kubenswrapper[4867]: I0214 04:34:13.737347 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:13 crc kubenswrapper[4867]: W0214 04:34:13.740692 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddca43c59_5d18_4f9d_bb72_49460d8d691f.slice/crio-d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70 WatchSource:0}: Error finding container d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70: Status 404 returned error can't find the container with id d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.342702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70"}
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345297 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe"}
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345328 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233"}
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.345341 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerStarted","Data":"e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74"}
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.380578 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.38055693 podStartE2EDuration="2.38055693s" podCreationTimestamp="2026-02-14 04:34:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:14.363142762 +0000 UTC m=+1486.444080066" watchObservedRunningTime="2026-02-14 04:34:14.38055693 +0000 UTC m=+1486.461494244"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.568615 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.572551 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.578024 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.578358 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.583478 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.599609 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.619829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.620130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.722980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723086 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.723173 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.729241 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.730730 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.741734 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.743165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"aodh-0\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " pod="openstack/aodh-0"
Feb 14 04:34:14 crc kubenswrapper[4867]: I0214 04:34:14.907072 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.372755 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"}
Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.373156 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"}
Feb 14 04:34:15 crc kubenswrapper[4867]: I0214 04:34:15.535123 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.420395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"}
Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.426692 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90"}
Feb 14 04:34:16 crc kubenswrapper[4867]: I0214 04:34:16.426751 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"873489133de3c353c9f8ca313cc4a323ae602d5913923a1f3148b8aae71c2510"}
Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.463626 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerStarted","Data":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"}
Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.464180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.498133 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.195903818 podStartE2EDuration="5.498110464s" podCreationTimestamp="2026-02-14 04:34:12 +0000 UTC" firstStartedPulling="2026-02-14 04:34:13.744038187 +0000 UTC m=+1485.824975501" lastFinishedPulling="2026-02-14 04:34:17.046244833 +0000 UTC m=+1489.127182147" observedRunningTime="2026-02-14 04:34:17.489906324 +0000 UTC m=+1489.570843638" watchObservedRunningTime="2026-02-14 04:34:17.498110464 +0000 UTC m=+1489.579047778"
Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.669523 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 04:34:17 crc kubenswrapper[4867]: I0214 04:34:17.669586 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 14 04:34:18 crc kubenswrapper[4867]: I0214 04:34:18.390191 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Feb 14 04:34:18 crc kubenswrapper[4867]: I0214 04:34:18.908844 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.501777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb"}
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.501966 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" containerID="cri-o://a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" gracePeriod=30
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502024 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" containerID="cri-o://0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" gracePeriod=30
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502125 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" containerID="cri-o://14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" gracePeriod=30
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.502149 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" containerID="cri-o://dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" gracePeriod=30
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.536683 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"]
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.539370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.553142 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"]
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665260 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665353 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.665526 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768133 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768689 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.768788 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.789077 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2"
\"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"redhat-operators-8w8t2\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:19 crc kubenswrapper[4867]: I0214 04:34:19.869748 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.523685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527282 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" exitCode=0 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527310 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" exitCode=2 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527320 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" exitCode=0 Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.527381 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"} Feb 14 04:34:20 crc kubenswrapper[4867]: I0214 04:34:20.688598 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:34:20 crc kubenswrapper[4867]: W0214 04:34:20.701975 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07a0a67f_28d7_4aa6_872b_a0223c46a9ce.slice/crio-fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896 WatchSource:0}: Error finding container fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896: Status 404 returned error can't find the container with id fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896 Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.079412 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.079892 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564234 4867 generic.go:334] "Generic (PLEG): container 
finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd" exitCode=0 Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd"} Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.564305 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896"} Feb 14 04:34:21 crc kubenswrapper[4867]: I0214 04:34:21.808788 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.162728 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.162856 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.246:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.669216 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:34:22 crc kubenswrapper[4867]: I0214 04:34:22.669283 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.587386 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d"} Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591062 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerStarted","Data":"676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2"} Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591211 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" containerID="cri-o://389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591305 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" containerID="cri-o://676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591357 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" 
containerID="cri-o://f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.591395 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" containerID="cri-o://9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" gracePeriod=30 Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.658371 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.236014663 podStartE2EDuration="9.658346533s" podCreationTimestamp="2026-02-14 04:34:14 +0000 UTC" firstStartedPulling="2026-02-14 04:34:15.539142969 +0000 UTC m=+1487.620080283" lastFinishedPulling="2026-02-14 04:34:22.961474839 +0000 UTC m=+1495.042412153" observedRunningTime="2026-02-14 04:34:23.642038715 +0000 UTC m=+1495.722976029" watchObservedRunningTime="2026-02-14 04:34:23.658346533 +0000 UTC m=+1495.739283837" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.691784 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:23 crc kubenswrapper[4867]: I0214 04:34:23.691996 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.176363 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.311964 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312060 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312366 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312428 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.312453 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") pod \"dca43c59-5d18-4f9d-bb72-49460d8d691f\" (UID: \"dca43c59-5d18-4f9d-bb72-49460d8d691f\") " Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.314343 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.314443 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.320641 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq" (OuterVolumeSpecName: "kube-api-access-hdfnq") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "kube-api-access-hdfnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.320812 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts" (OuterVolumeSpecName: "scripts") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415343 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415376 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415386 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dca43c59-5d18-4f9d-bb72-49460d8d691f-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.415395 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdfnq\" (UniqueName: \"kubernetes.io/projected/dca43c59-5d18-4f9d-bb72-49460d8d691f-kube-api-access-hdfnq\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.417705 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.445925 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.503615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data" (OuterVolumeSpecName: "config-data") pod "dca43c59-5d18-4f9d-bb72-49460d8d691f" (UID: "dca43c59-5d18-4f9d-bb72-49460d8d691f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518235 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518272 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.518282 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dca43c59-5d18-4f9d-bb72-49460d8d691f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602333 4867 generic.go:334] "Generic (PLEG): container finished" podID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" exitCode=0 Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602402 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dca43c59-5d18-4f9d-bb72-49460d8d691f","Type":"ContainerDied","Data":"d034009cb6942bfb4489567285645544a177850350cdc4d9dd4b67c0404cdf70"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602448 4867 scope.go:117] "RemoveContainer" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.602609 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617721 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" exitCode=0 Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617770 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" exitCode=0 Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617794 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.617847 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90"} Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.652467 4867 scope.go:117] "RemoveContainer" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.652912 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.673327 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.693592 4867 scope.go:117] "RemoveContainer" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.733036 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734831 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734857 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734872 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734878 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734903 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734909 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.734945 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.734951 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" 
Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735192 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="proxy-httpd" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735211 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-central-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735222 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="sg-core" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.735232 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" containerName="ceilometer-notification-agent" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.737547 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.742172 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.742599 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.749703 4867 scope.go:117] "RemoveContainer" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.749881 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.820353 4867 scope.go:117] "RemoveContainer" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.821183 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": container with ID starting with 0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35 not found: ID does not exist" containerID="0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821268 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35"} err="failed to get container status \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": rpc error: code = NotFound desc = could not find container \"0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35\": container with ID starting with 0cabda8eaa1316182eb67ad8e8e3fc2742a5e7f04936e1e1543c720ea2363d35 not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821300 4867 scope.go:117] "RemoveContainer" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.821603 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": container with ID starting with 14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc not found: ID does not exist" containerID="14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc" Feb 14 04:34:24 crc 
kubenswrapper[4867]: I0214 04:34:24.821632 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc"} err="failed to get container status \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": rpc error: code = NotFound desc = could not find container \"14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc\": container with ID starting with 14a477b5f8abacdda560860647515fd1269df06f35bc862bd440744b72123dfc not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.821653 4867 scope.go:117] "RemoveContainer" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.822241 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": container with ID starting with dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe not found: ID does not exist" containerID="dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.822286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe"} err="failed to get container status \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": rpc error: code = NotFound desc = could not find container \"dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe\": container with ID starting with dd7577aaaaab45999752b1f4efb80ed248e3f9a60ebc38d3fa23086bf2d9e0fe not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.822317 4867 scope.go:117] "RemoveContainer" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: E0214 04:34:24.824495 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": container with ID starting with a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14 not found: ID does not exist" containerID="a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.824541 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14"} err="failed to get container status \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": rpc error: code = NotFound desc = could not find container \"a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14\": container with ID starting with a0bf448b9af2da9137bfe6fd50f230b11043ba68729b4c2385f16d8c94be6d14 not found: ID does not exist" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826380 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826463 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826528 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826586 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.826679 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929465 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929543 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929575 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " 
pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929820 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.929863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.930285 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.930299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.934908 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.935116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.935192 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.936381 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:24 crc kubenswrapper[4867]: I0214 04:34:24.948753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"ceilometer-0\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " pod="openstack/ceilometer-0" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 
04:34:25.009687 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dca43c59-5d18-4f9d-bb72-49460d8d691f" path="/var/lib/kubelet/pods/dca43c59-5d18-4f9d-bb72-49460d8d691f/volumes" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.067288 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.639150 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" exitCode=0 Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.639474 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171"} Feb 14 04:34:25 crc kubenswrapper[4867]: W0214 04:34:25.661309 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91a07e13_20f0_41a3_b974_4570ebfdc497.slice/crio-f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332 WatchSource:0}: Error finding container f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332: Status 404 returned error can't find the container with id f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332 Feb 14 04:34:25 crc kubenswrapper[4867]: I0214 04:34:25.677825 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:26 crc kubenswrapper[4867]: I0214 04:34:26.654877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6"} Feb 14 04:34:26 crc kubenswrapper[4867]: I0214 04:34:26.655654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332"} Feb 14 04:34:27 crc kubenswrapper[4867]: I0214 04:34:27.670272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d"} Feb 14 04:34:28 crc kubenswrapper[4867]: I0214 04:34:28.683895 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.712168 4867 generic.go:334] "Generic (PLEG): container finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d" exitCode=0 Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.712223 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.719496 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerStarted","Data":"8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7"} Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.720137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:30 crc kubenswrapper[4867]: I0214 04:34:30.781227 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6832021470000003 podStartE2EDuration="6.781183945s" podCreationTimestamp="2026-02-14 04:34:24 +0000 UTC" firstStartedPulling="2026-02-14 04:34:25.664774394 +0000 UTC m=+1497.745711708" lastFinishedPulling="2026-02-14 04:34:29.762756192 +0000 UTC m=+1501.843693506" observedRunningTime="2026-02-14 04:34:30.769312116 +0000 UTC m=+1502.850249430" watchObservedRunningTime="2026-02-14 04:34:30.781183945 +0000 UTC m=+1502.862121259" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.083963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.084547 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.087584 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.088816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.731295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerStarted","Data":"b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341"} Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.732427 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.734988 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.757747 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8w8t2" podStartSLOduration=3.48626663 podStartE2EDuration="12.757714154s" podCreationTimestamp="2026-02-14 04:34:19 +0000 UTC" firstStartedPulling="2026-02-14 04:34:21.85021712 +0000 UTC m=+1493.931154434" lastFinishedPulling="2026-02-14 04:34:31.121664644 +0000 UTC m=+1503.202601958" observedRunningTime="2026-02-14 04:34:31.757673023 +0000 UTC m=+1503.838610347" watchObservedRunningTime="2026-02-14 04:34:31.757714154 +0000 UTC m=+1503.838651468" Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.983794 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:31 crc kubenswrapper[4867]: I0214 04:34:31.989133 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.031601 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146471 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146651 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146788 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146811 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.146846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248821 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248873 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248914 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248949 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.248980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.249057 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250378 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250427 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.250964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.273451 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnqpt\" (UniqueName: 
\"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"dnsmasq-dns-6b7bbf7cf9-5cgsc\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.336407 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.716921 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.727937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.734024 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:34:32 crc kubenswrapper[4867]: I0214 04:34:32.820878 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:34:33 crc kubenswrapper[4867]: I0214 04:34:33.362124 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:34:33 crc kubenswrapper[4867]: I0214 04:34:33.771663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerStarted","Data":"3c342daaec09db1c73482280fce80173920eec884b7d07687fab104355216038"} Feb 14 04:34:34 crc kubenswrapper[4867]: I0214 04:34:34.782833 4867 generic.go:334] "Generic (PLEG): container finished" podID="5971b677-9b43-4667-b205-3926975d03d8" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" exitCode=0 Feb 14 04:34:34 crc kubenswrapper[4867]: I0214 04:34:34.784659 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.027579 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.028643 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" containerID="cri-o://3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" gracePeriod=30 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.028894 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" containerID="cri-o://6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" gracePeriod=30 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.804737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerStarted","Data":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.804912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.808073 4867 generic.go:334] "Generic (PLEG): 
container finished" podID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" exitCode=143 Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.808109 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"} Feb 14 04:34:35 crc kubenswrapper[4867]: I0214 04:34:35.832362 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" podStartSLOduration=4.832335434 podStartE2EDuration="4.832335434s" podCreationTimestamp="2026-02-14 04:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:35.824773451 +0000 UTC m=+1507.905710775" watchObservedRunningTime="2026-02-14 04:34:35.832335434 +0000 UTC m=+1507.913272748" Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.705166 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708096 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" containerID="cri-o://04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708372 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" containerID="cri-o://8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708438 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" containerID="cri-o://365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" gracePeriod=30 Feb 14 04:34:36 crc kubenswrapper[4867]: I0214 04:34:36.708523 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" containerID="cri-o://d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" gracePeriod=30 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.526864 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647581 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.647803 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") pod \"871276b6-7245-427a-8b55-29dfdfe3695b\" (UID: \"871276b6-7245-427a-8b55-29dfdfe3695b\") " Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.690101 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd" (OuterVolumeSpecName: "kube-api-access-dnxbd") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "kube-api-access-dnxbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.703353 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.721696 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data" (OuterVolumeSpecName: "config-data") pod "871276b6-7245-427a-8b55-29dfdfe3695b" (UID: "871276b6-7245-427a-8b55-29dfdfe3695b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751481 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751549 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnxbd\" (UniqueName: \"kubernetes.io/projected/871276b6-7245-427a-8b55-29dfdfe3695b-kube-api-access-dnxbd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.751563 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/871276b6-7245-427a-8b55-29dfdfe3695b-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834289 4867 generic.go:334] "Generic (PLEG): container finished" podID="871276b6-7245-427a-8b55-29dfdfe3695b" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" exitCode=137 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerDied","Data":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834390 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"871276b6-7245-427a-8b55-29dfdfe3695b","Type":"ContainerDied","Data":"7421ae1cc8f7150f6013e7337e1040d9ce9252e306ea9b4407c26605f30d6363"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834408 4867 scope.go:117] "RemoveContainer" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.834575 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.847992 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848034 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" exitCode=2 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848044 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848053 4867 generic.go:334] "Generic (PLEG): container finished" podID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerID="04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" exitCode=0 Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848081 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848135 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.848146 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6"} Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.938592 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.944001 4867 scope.go:117] "RemoveContainer" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: E0214 04:34:37.946866 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": container with ID starting with 7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9 not found: ID does not exist" containerID="7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.946921 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9"} err="failed to get container status \"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": rpc error: code = NotFound desc = could not find container \"7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9\": container 
with ID starting with 7a29ef69a79abcd2999f8338c936a26c65e2128a8ad6b8ec14625ac6e941e0d9 not found: ID does not exist" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.955272 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.975263 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:37 crc kubenswrapper[4867]: E0214 04:34:37.976522 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.976553 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.976795 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" containerName="nova-cell1-novncproxy-novncproxy" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.977827 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.979757 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.980783 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.980937 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 14 04:34:37 crc kubenswrapper[4867]: I0214 04:34:37.991041 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.059726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060200 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060751 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.060817 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") 
" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.061151 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.089710 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.164654 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.164793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165054 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165179 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165240 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165387 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165432 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") pod \"91a07e13-20f0-41a3-b974-4570ebfdc497\" (UID: \"91a07e13-20f0-41a3-b974-4570ebfdc497\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165998 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166114 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165743 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.165822 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166422 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.166435 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/91a07e13-20f0-41a3-b974-4570ebfdc497-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.171125 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.173123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.175903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.180071 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e1bf5e4-7b04-4a47-aa41-e547815fc623-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.190749 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts" (OuterVolumeSpecName: "scripts") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.191213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx" (OuterVolumeSpecName: "kube-api-access-msskx") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "kube-api-access-msskx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.191422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxcfc\" (UniqueName: \"kubernetes.io/projected/3e1bf5e4-7b04-4a47-aa41-e547815fc623-kube-api-access-xxcfc\") pod \"nova-cell1-novncproxy-0\" (UID: \"3e1bf5e4-7b04-4a47-aa41-e547815fc623\") " pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.227674 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269326 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269365 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msskx\" (UniqueName: \"kubernetes.io/projected/91a07e13-20f0-41a3-b974-4570ebfdc497-kube-api-access-msskx\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.269378 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.341959 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.372821 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.402328 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.438848 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data" (OuterVolumeSpecName: "config-data") pod "91a07e13-20f0-41a3-b974-4570ebfdc497" (UID: "91a07e13-20f0-41a3-b974-4570ebfdc497"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.475945 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91a07e13-20f0-41a3-b974-4570ebfdc497-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: E0214 04:34:38.483540 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf365fedd_2e1e_41da_aeed_c2f6cf9de0eb.slice/crio-3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.761813 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.822125 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.825933 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.828379 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.828551 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") pod \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\" (UID: \"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb\") " Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.831477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs" (OuterVolumeSpecName: "logs") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.849584 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8" (OuterVolumeSpecName: "kube-api-access-bnlv8") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "kube-api-access-bnlv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.868013 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.878903 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data" (OuterVolumeSpecName: "config-data") pod "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" (UID: "f365fedd-2e1e-41da-aeed-c2f6cf9de0eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"91a07e13-20f0-41a3-b974-4570ebfdc497","Type":"ContainerDied","Data":"f206cad755a2b2f3c0b1803ba04c8a34a0ff6af924273028ea31c8d2d6a28332"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898236 4867 scope.go:117] "RemoveContainer" containerID="8b166e03499fcd6a7d3f4d54be9e9dad070c581c83b5e328175a1a07459495b7" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.898277 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932385 4867 generic.go:334] "Generic (PLEG): container finished" podID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" exitCode=0 Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932684 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f365fedd-2e1e-41da-aeed-c2f6cf9de0eb","Type":"ContainerDied","Data":"8eef4a09d30f75b09b2a4e941b5145891d9e9ba139549a7546f8625ba9359aed"} Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.932760 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.982008 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989360 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989394 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:38 crc kubenswrapper[4867]: I0214 04:34:38.989409 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnlv8\" (UniqueName: \"kubernetes.io/projected/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb-kube-api-access-bnlv8\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.091292 4867 scope.go:117] "RemoveContainer" containerID="d8f535569c3a2f29a4194d25fe02c25c8862ebdc340d4ee65743f0cf1cd3d4e2" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.119534 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="871276b6-7245-427a-8b55-29dfdfe3695b" path="/var/lib/kubelet/pods/871276b6-7245-427a-8b55-29dfdfe3695b/volumes" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.120917 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.137712 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.160820 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.161609 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163884 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163892 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163912 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163919 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163937 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163945 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163953 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163959 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.163984 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.163990 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164295 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-log" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164316 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="proxy-httpd" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164328 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-central-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164341 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="ceilometer-notification-agent" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" containerName="sg-core" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.164371 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" containerName="nova-api-api" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.168317 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.177048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.177280 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.192825 4867 scope.go:117] "RemoveContainer" containerID="365f4ce280bcf54eaf77d8f1f86bd38acc51e0b4dcba2956a590949d246f3f7d" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.198469 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.217793 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.281686 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.305207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.305284 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307282 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307480 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307824 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.307933 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.325278 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.337358 4867 scope.go:117] "RemoveContainer" containerID="04371cd2bd6d981eba64b7f4eaeef7200ada8dd86442ec2e8912d6830b76b8d6" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.349049 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.349181 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.357465 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.357813 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.369277 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.378180 4867 scope.go:117] "RemoveContainer" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.413743 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.413945 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414066 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414154 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414272 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.414936 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.418909 4867 scope.go:117] "RemoveContainer" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.419767 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.423776 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.437265 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.438058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.439980 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.447089 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.461186 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ceilometer-0\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518148 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518245 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518371 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518453 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518527 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.518566 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.542901 4867 scope.go:117] "RemoveContainer" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.546315 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": container with ID starting with 3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9 not found: ID does not exist" containerID="3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.546367 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9"} err="failed to get container status \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": rpc error: code = NotFound desc = could not find container \"3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9\": container with ID starting with 3bf6de6c41ec2894ac7f99d62ff3f51ff5cc922eed2592185cfef6d65b82aff9 not found: ID does not exist" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.546399 4867 scope.go:117] "RemoveContainer" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc kubenswrapper[4867]: E0214 04:34:39.548789 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": container with ID starting with 6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84 not found: 
ID does not exist" containerID="6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.550208 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84"} err="failed to get container status \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": rpc error: code = NotFound desc = could not find container \"6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84\": container with ID starting with 6a17090aa1f1970c7506d253b3e201ba17d075849c21455fa12bc2d248778b84 not found: ID does not exist" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621074 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621298 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621353 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.621464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.622256 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.626317 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.629621 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.631116 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.631366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.641398 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.648044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"nova-api-0\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.674851 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.872416 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:39 crc kubenswrapper[4867]: I0214 04:34:39.873361 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.016988 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e1bf5e4-7b04-4a47-aa41-e547815fc623","Type":"ContainerStarted","Data":"27618aec079281309f2f806dff0227f6ec2dda3b07305db0870bc2570992a846"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.017601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3e1bf5e4-7b04-4a47-aa41-e547815fc623","Type":"ContainerStarted","Data":"7a265dffe4bee445426f6de577d81903f5f8f44fc744a7f1a6c93811b1574fb2"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.021489 4867 generic.go:334] "Generic (PLEG): container finished" podID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerID="bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" exitCode=137 Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.022235 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerDied","Data":"bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f"} Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.055285 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.055262149 podStartE2EDuration="3.055262149s" podCreationTimestamp="2026-02-14 04:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:40.051317353 +0000 UTC m=+1512.132254667" watchObservedRunningTime="2026-02-14 04:34:40.055262149 +0000 UTC m=+1512.136199463" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.303473 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.327788 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.361328 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.394432 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data" (OuterVolumeSpecName: "config-data") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.437279 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.437611 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") pod \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\" (UID: \"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf\") " Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.438589 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.445723 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467" (OuterVolumeSpecName: "kube-api-access-bv467") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "kube-api-access-bv467". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.491206 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" (UID: "ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.538485 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.544048 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv467\" (UniqueName: \"kubernetes.io/projected/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-kube-api-access-bv467\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.544091 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:40 crc kubenswrapper[4867]: I0214 04:34:40.657708 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.019708 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=< Feb 14 04:34:41 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:34:41 crc kubenswrapper[4867]: > Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.020761 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91a07e13-20f0-41a3-b974-4570ebfdc497" path="/var/lib/kubelet/pods/91a07e13-20f0-41a3-b974-4570ebfdc497/volumes" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.023243 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f365fedd-2e1e-41da-aeed-c2f6cf9de0eb" path="/var/lib/kubelet/pods/f365fedd-2e1e-41da-aeed-c2f6cf9de0eb/volumes" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063896 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063978 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.063990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerStarted","Data":"23eda3f5de37b914af1120c4a29676bc10a45dd14a87ddd0f0c35695c9bbb5a7"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf","Type":"ContainerDied","Data":"2f1ec16c434c7fe8c8b2e012785b630337a932a6d095d2d76aaa4e23a79c54fa"} Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077762 4867 scope.go:117] "RemoveContainer" containerID="bc19b23b550c0ff93b93128b07ead353fc9290a4dbd1f4015fc48de629ff924f" Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077310 4867 util.go:48] "No ready sandbox for pod can be found. 
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.077310 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.082336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2"}
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.160688 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.160653868 podStartE2EDuration="2.160653868s" podCreationTimestamp="2026-02-14 04:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:41.09112711 +0000 UTC m=+1513.172064424" watchObservedRunningTime="2026-02-14 04:34:41.160653868 +0000 UTC m=+1513.241591192"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.223020 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.240205 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.291580 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:34:41 crc kubenswrapper[4867]: E0214 04:34:41.292450 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.292481 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.292812 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" containerName="nova-scheduler-scheduler"
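The startup-latency record is plain wall-clock arithmetic: podStartSLOduration=2.160653868 is watchObservedRunningTime (04:34:41.160653868) minus podCreationTimestamp (04:34:39), and the trailing m=+1513.24... is Go's monotonic-clock reading (seconds since process start), which can be dropped for this calculation. A sketch that reproduces the figure:

    from datetime import datetime, timezone

    def parse_klog_time(value: str) -> datetime:
        """Parse Go time strings like '2026-02-14 04:34:41.16... +0000 UTC m=+1513.24...'."""
        wall = value.split(" m=")[0].replace(" +0000 UTC", "")
        if "." in wall:                      # Go prints nanoseconds; Python takes microseconds
            head, frac = wall.rsplit(".", 1)
            wall = f"{head}.{frac[:6]}"
        else:
            wall += ".000000"
        return datetime.strptime(wall, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created = parse_klog_time("2026-02-14 04:34:39 +0000 UTC")
    watched = parse_klog_time("2026-02-14 04:34:41.160653868 +0000 UTC m=+1513.241591192")
    print((watched - created).total_seconds())   # 2.160653, the reported SLO duration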
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.293902 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.301477 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.323708 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481051 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481360 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.481637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.583762 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.583974 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.584061 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.590576 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.596810 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.609984 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"nova-scheduler-0\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.616334 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 14 04:34:41 crc kubenswrapper[4867]: I0214 04:34:41.698796 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod2bbf3a42-f012-4bed-a60e-1defcd0b1af9"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod2bbf3a42-f012-4bed-a60e-1defcd0b1af9] : Timed out while waiting for systemd to remove kubepods-besteffort-pod2bbf3a42_f012_4bed_a60e_1defcd0b1af9.slice"
Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.115224 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.120704 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6"}
Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.339428 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"
Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.418092 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:34:42 crc kubenswrapper[4867]: I0214 04:34:42.418361 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns" containerID="cri-o://34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" gracePeriod=10
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.016346 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf" path="/var/lib/kubelet/pods/ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf/volumes"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.127331 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.145902 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146006 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146071 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146111 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146135 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.146198 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") pod \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\" (UID: \"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9\") "
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.159376 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.162347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.164986 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerStarted","Data":"c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.165050 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerStarted","Data":"4ee4cff4cc87308f769e3bd724d5abd95ae658a9785bc66a6f75cd2304c98ea1"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175372 4867 generic.go:334] "Generic (PLEG): container finished" podID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f" exitCode=0
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175419 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl" event={"ID":"6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9","Type":"ContainerDied","Data":"d75507374634724c8a1ef310952a5ce339f06c748d3d87d74bf982c68a7ee156"}
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175468 4867 scope.go:117] "RemoveContainer" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.175604 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-sf4cl"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.199984 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v" (OuterVolumeSpecName: "kube-api-access-qpf7v") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "kube-api-access-qpf7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.245081 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.245061005 podStartE2EDuration="2.245061005s" podCreationTimestamp="2026-02-14 04:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:43.232524618 +0000 UTC m=+1515.313461942" watchObservedRunningTime="2026-02-14 04:34:43.245061005 +0000 UTC m=+1515.325998319"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.263663 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpf7v\" (UniqueName: \"kubernetes.io/projected/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-kube-api-access-qpf7v\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.291119 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.323939 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.324498 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config" (OuterVolumeSpecName: "config") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.324792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.349954 4867 scope.go:117] "RemoveContainer" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.367473 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368016 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368228 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-config\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.368335 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.386711 4867 scope.go:117] "RemoveContainer" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"
Feb 14 04:34:43 crc kubenswrapper[4867]: E0214 04:34:43.387443 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": container with ID starting with 34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f not found: ID does not exist" containerID="34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.387518 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f"} err="failed to get container status \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": rpc error: code = NotFound desc = could not find container \"34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f\": container with ID starting with 34d8986cbe09c27161bbd156dfd8b33968031eafe3d3eb76f1f6a490b717eb6f not found: ID does not exist"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.387558 4867 scope.go:117] "RemoveContainer" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"
Feb 14 04:34:43 crc kubenswrapper[4867]: E0214 04:34:43.388175 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": container with ID starting with 25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374 not found: ID does not exist" containerID="25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.388224 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374"} err="failed to get container status \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": rpc error: code = NotFound desc = could not find container \"25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374\": container with ID starting with 25f3fdaf8d189df27a82c7b6c2f5ffc72a3cc21b6fdff3aa5db60ff88eff4374 not found: ID does not exist"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.403549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" (UID: "6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.403669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.471734 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.526317 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:34:43 crc kubenswrapper[4867]: I0214 04:34:43.589220 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-sf4cl"]
Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.013918 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" path="/var/lib/kubelet/pods/6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9/volumes"
Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208183 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerStarted","Data":"8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0"}
Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208379 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent" containerID="cri-o://2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6" gracePeriod=30
Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.208783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd" containerID="cri-o://8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.209413 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core" containerID="cri-o://645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.209459 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent" containerID="cri-o://da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1" gracePeriod=30 Feb 14 04:34:45 crc kubenswrapper[4867]: I0214 04:34:45.238297 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.837405242 podStartE2EDuration="6.238277369s" podCreationTimestamp="2026-02-14 04:34:39 +0000 UTC" firstStartedPulling="2026-02-14 04:34:40.533601901 +0000 UTC m=+1512.614539215" lastFinishedPulling="2026-02-14 04:34:43.934474028 +0000 UTC m=+1516.015411342" observedRunningTime="2026-02-14 04:34:45.234684893 +0000 UTC m=+1517.315622207" watchObservedRunningTime="2026-02-14 04:34:45.238277369 +0000 UTC m=+1517.319214683" Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.228690 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0" exitCode=0 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229258 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d" exitCode=2 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229273 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerID="da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1" exitCode=0 Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.228790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229333 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.229359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1"} Feb 14 04:34:46 crc kubenswrapper[4867]: I0214 04:34:46.616848 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.287037 4867 generic.go:334] "Generic (PLEG): container finished" podID="ce113f40-e807-4f30-adaf-8053c4ac7b65" 
containerID="2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6" exitCode=0 Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.287113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6"} Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.288796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ce113f40-e807-4f30-adaf-8053c4ac7b65","Type":"ContainerDied","Data":"6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2"} Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.288899 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3a0a4513ba6bef6e4ce1201f78bb96037334e2512744dff6bf6a6b1b3b22b2" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.301358 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.403804 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.425604 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429319 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429429 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429733 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.429862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc 
kubenswrapper[4867]: I0214 04:34:48.429910 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") pod \"ce113f40-e807-4f30-adaf-8053c4ac7b65\" (UID: \"ce113f40-e807-4f30-adaf-8053c4ac7b65\") " Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.430097 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.430364 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.431305 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.431333 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ce113f40-e807-4f30-adaf-8053c4ac7b65-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.437794 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc" (OuterVolumeSpecName: "kube-api-access-77lbc") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "kube-api-access-77lbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.439956 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts" (OuterVolumeSpecName: "scripts") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.483934 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534187 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534412 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77lbc\" (UniqueName: \"kubernetes.io/projected/ce113f40-e807-4f30-adaf-8053c4ac7b65-kube-api-access-77lbc\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.534473 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.535621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.558741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data" (OuterVolumeSpecName: "config-data") pod "ce113f40-e807-4f30-adaf-8053c4ac7b65" (UID: "ce113f40-e807-4f30-adaf-8053c4ac7b65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.637104 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:48 crc kubenswrapper[4867]: I0214 04:34:48.637137 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce113f40-e807-4f30-adaf-8053c4ac7b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.299454 4867 util.go:48] "No ready sandbox for pod can be found. 
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.299454 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.353889 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.381320 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.395726 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.411709 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412546 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="init"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412588 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="init"
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412611 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412618 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd"
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412678 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns"
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412702 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412708 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412719 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412725 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core"
Feb 14 04:34:49 crc kubenswrapper[4867]: E0214 04:34:49.412743 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412751 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412986 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-central-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.412998 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="proxy-httpd"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413031 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="ceilometer-notification-agent"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413045 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbc9cd1-f13a-4b7c-9d2b-0075c2b358c9" containerName="dnsmasq-dns"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.413066 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" containerName="sg-core"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.415290 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.420651 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.423643 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.424338 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.561340 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"]
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.563106 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564364 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564483 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564584 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564872 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.564970 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.565283 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.565336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.566993 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.573003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"]
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.667900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.667977 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668007 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668416 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668739 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668803 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.668863 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669178 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.669946 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.670058 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.674239 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.674580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675030 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675819 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.675864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.678695 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.692728 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"ceilometer-0\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.754126 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771540 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771765 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771830 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.771917 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.775754 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.775940 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.778189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.793692 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"nova-cell1-cell-mapping-k2ls7\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:49 crc kubenswrapper[4867]: I0214 04:34:49.889169 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7"
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.277161 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.344693 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"36752e6e5f2c31ee736f7a9a28d860706f6c2685f55f602f485609bff4a72cd3"}
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.521052 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"]
Feb 14 04:34:50 crc kubenswrapper[4867]: W0214 04:34:50.536614 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4be79f3c_fa78_40d2_9ad9_d1dfd965c831.slice/crio-93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb WatchSource:0}: Error finding container 93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb: Status 404 returned error can't find the container with id 93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.746745 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.747026 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.0:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 04:34:50 crc kubenswrapper[4867]: I0214 04:34:50.944430 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=<
Feb 14 04:34:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 04:34:50 crc kubenswrapper[4867]: >
Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.022347 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce113f40-e807-4f30-adaf-8053c4ac7b65" path="/var/lib/kubelet/pods/ce113f40-e807-4f30-adaf-8053c4ac7b65/volumes"
event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.365519 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerStarted","Data":"8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.365580 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerStarted","Data":"93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb"} Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.391026 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-k2ls7" podStartSLOduration=2.391005575 podStartE2EDuration="2.391005575s" podCreationTimestamp="2026-02-14 04:34:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:34:51.389011722 +0000 UTC m=+1523.469949036" watchObservedRunningTime="2026-02-14 04:34:51.391005575 +0000 UTC m=+1523.471942889" Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.617029 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 04:34:51 crc kubenswrapper[4867]: I0214 04:34:51.661705 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 04:34:52 crc kubenswrapper[4867]: I0214 04:34:52.380523 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6"} Feb 14 04:34:52 crc kubenswrapper[4867]: I0214 04:34:52.432368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 04:34:53 crc kubenswrapper[4867]: I0214 04:34:53.396127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.424754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerStarted","Data":"d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.425690 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.432722 4867 generic.go:334] "Generic (PLEG): container finished" podID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerID="676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" exitCode=137 Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.432769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2"} Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.466289 4867 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.829625243 podStartE2EDuration="5.466267925s" podCreationTimestamp="2026-02-14 04:34:49 +0000 UTC" firstStartedPulling="2026-02-14 04:34:50.27647095 +0000 UTC m=+1522.357408264" lastFinishedPulling="2026-02-14 04:34:53.913113622 +0000 UTC m=+1525.994050946" observedRunningTime="2026-02-14 04:34:54.45269113 +0000 UTC m=+1526.533628444" watchObservedRunningTime="2026-02-14 04:34:54.466267925 +0000 UTC m=+1526.547205239" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.603624 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.658707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.658975 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.659016 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.659283 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") pod \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\" (UID: \"3b8b8297-e7e9-4d4e-9fbf-8aa302601521\") " Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.712435 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts" (OuterVolumeSpecName: "scripts") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.716254 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq" (OuterVolumeSpecName: "kube-api-access-gv5cq") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "kube-api-access-gv5cq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.772807 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.772850 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv5cq\" (UniqueName: \"kubernetes.io/projected/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-kube-api-access-gv5cq\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.919433 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data" (OuterVolumeSpecName: "config-data") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.951251 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b8b8297-e7e9-4d4e-9fbf-8aa302601521" (UID: "3b8b8297-e7e9-4d4e-9fbf-8aa302601521"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.980280 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:54 crc kubenswrapper[4867]: I0214 04:34:54.980703 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b8297-e7e9-4d4e-9fbf-8aa302601521-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.449760 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.450725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3b8b8297-e7e9-4d4e-9fbf-8aa302601521","Type":"ContainerDied","Data":"873489133de3c353c9f8ca313cc4a323ae602d5913923a1f3148b8aae71c2510"} Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.451434 4867 scope.go:117] "RemoveContainer" containerID="676b44febd2b1e6f8adc3b36dfacb2ca3ffd9bcd4f9a33888b2b7f58cb54f5e2" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.510694 4867 scope.go:117] "RemoveContainer" containerID="f7c20be58a69fd5c190fa1d934c18d6f79089308881712b0a2523c6851d81171" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.557597 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.596676 4867 scope.go:117] "RemoveContainer" containerID="9248cc350ed932fdee6220c9e37ba117089264f71d0581c8a1792aace4facbcb" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.624679 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.640895 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641725 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641751 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641830 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641838 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: E0214 04:34:55.641862 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.641868 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642116 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-api" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642147 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-notifier" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642165 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-listener" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.642181 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" containerName="aodh-evaluator" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.644974 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648681 4867 scope.go:117] "RemoveContainer" containerID="389edd9377562dde5f7fe2a4c07b6137629b507c4f69fc65a4a622c3e66a0b90" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648950 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.648965 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652010 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652244 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.652536 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.657737 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712837 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712890 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.712977 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713008 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.713139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: 
\"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816390 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816640 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816761 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.816804 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.822064 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.822941 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.832561 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.833604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.834965 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:55 crc kubenswrapper[4867]: I0214 04:34:55.838540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"aodh-0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " pod="openstack/aodh-0" Feb 14 04:34:56 crc kubenswrapper[4867]: I0214 04:34:56.033348 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:34:56 crc kubenswrapper[4867]: I0214 04:34:56.536743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.011134 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b8b8297-e7e9-4d4e-9fbf-8aa302601521" path="/var/lib/kubelet/pods/3b8b8297-e7e9-4d4e-9fbf-8aa302601521/volumes" Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.478145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45"} Feb 14 04:34:57 crc kubenswrapper[4867]: I0214 04:34:57.478699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"cc6bfc1f8b14bfadc90bd97fe9104d42e32da1b206a8c9f9b7d46cb64815cc9b"} Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.494695 4867 generic.go:334] "Generic (PLEG): container finished" podID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerID="8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15" exitCode=0 Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.494790 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerDied","Data":"8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15"} Feb 14 04:34:58 crc kubenswrapper[4867]: I0214 04:34:58.500277 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be"} Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.513434 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef"} Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.689830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.690818 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.693937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:34:59 crc kubenswrapper[4867]: I0214 04:34:59.733317 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.082078 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164497 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164613 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.164667 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") pod \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\" (UID: \"4be79f3c-fa78-40d2-9ad9-d1dfd965c831\") " Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.171914 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h" (OuterVolumeSpecName: "kube-api-access-d7j5h") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "kube-api-access-d7j5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.173896 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts" (OuterVolumeSpecName: "scripts") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.221589 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data" (OuterVolumeSpecName: "config-data") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.237574 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be79f3c-fa78-40d2-9ad9-d1dfd965c831" (UID: "4be79f3c-fa78-40d2-9ad9-d1dfd965c831"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268759 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268801 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268812 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7j5h\" (UniqueName: \"kubernetes.io/projected/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-kube-api-access-d7j5h\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.268824 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be79f3c-fa78-40d2-9ad9-d1dfd965c831-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527123 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-k2ls7" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527131 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-k2ls7" event={"ID":"4be79f3c-fa78-40d2-9ad9-d1dfd965c831","Type":"ContainerDied","Data":"93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb"} Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.527215 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93942f2908369aa48586c169f69ff9c6fce0cd69dd8bdba555432c48fe82f7bb" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.529867 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerStarted","Data":"27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95"} Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.530380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.540435 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.664595 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.507778315 podStartE2EDuration="5.664565614s" podCreationTimestamp="2026-02-14 04:34:55 +0000 UTC" firstStartedPulling="2026-02-14 04:34:56.545909792 +0000 UTC m=+1528.626847106" lastFinishedPulling="2026-02-14 04:34:59.702697091 +0000 UTC m=+1531.783634405" observedRunningTime="2026-02-14 04:35:00.555306049 +0000 UTC m=+1532.636243363" watchObservedRunningTime="2026-02-14 04:35:00.664565614 +0000 UTC m=+1532.745502928" Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.856734 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.902601 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.902950 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" 
podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" containerID="cri-o://c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.914970 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.915234 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" containerID="cri-o://fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.915825 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" containerID="cri-o://4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" gracePeriod=30 Feb 14 04:35:00 crc kubenswrapper[4867]: I0214 04:35:00.947342 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" probeResult="failure" output=< Feb 14 04:35:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:35:00 crc kubenswrapper[4867]: > Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.251223 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.251550 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.541806 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerID="fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" exitCode=143 Feb 14 04:35:01 crc kubenswrapper[4867]: I0214 04:35:01.541877 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233"} Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.627079 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.629302 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] 
Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.635042 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 14 04:35:01 crc kubenswrapper[4867]: E0214 04:35:01.635128 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:02 crc kubenswrapper[4867]: I0214 04:35:02.551192 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" containerID="cri-o://49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" gracePeriod=30 Feb 14 04:35:02 crc kubenswrapper[4867]: I0214 04:35:02.551246 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" containerID="cri-o://8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" gracePeriod=30 Feb 14 04:35:03 crc kubenswrapper[4867]: I0214 04:35:03.566462 4867 generic.go:334] "Generic (PLEG): container finished" podID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerID="49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" exitCode=143 Feb 14 04:35:03 crc kubenswrapper[4867]: I0214 04:35:03.566979 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.316635 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:42546->10.217.0.248:8775: read: connection reset by peer" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.316638 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.248:8775/\": read tcp 10.217.0.2:42538->10.217.0.248:8775: read: connection reset by peer" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.580997 4867 generic.go:334] "Generic (PLEG): container finished" podID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerID="4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" exitCode=0 Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.582308 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.586826 4867 generic.go:334] "Generic (PLEG): container finished" podID="09251416-b49f-4e81-9584-8428f1903785" 
containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" exitCode=0 Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.586860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerDied","Data":"c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946"} Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.792713 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894025 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894456 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.894965 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") pod \"09251416-b49f-4e81-9584-8428f1903785\" (UID: \"09251416-b49f-4e81-9584-8428f1903785\") " Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.905670 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd" (OuterVolumeSpecName: "kube-api-access-gwwnd") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "kube-api-access-gwwnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.969731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.970950 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data" (OuterVolumeSpecName: "config-data") pod "09251416-b49f-4e81-9584-8428f1903785" (UID: "09251416-b49f-4e81-9584-8428f1903785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.974130 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:04 crc kubenswrapper[4867]: I0214 04:35:04.999447 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwwnd\" (UniqueName: \"kubernetes.io/projected/09251416-b49f-4e81-9584-8428f1903785-kube-api-access-gwwnd\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.000131 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.000605 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09251416-b49f-4e81-9584-8428f1903785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.115651 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.115981 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116013 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116204 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.116317 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") pod \"35a6b709-4f80-4abc-a92f-24a43d09a805\" (UID: \"35a6b709-4f80-4abc-a92f-24a43d09a805\") " Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.117324 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs" (OuterVolumeSpecName: "logs") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.120143 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a6b709-4f80-4abc-a92f-24a43d09a805-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.120463 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz" (OuterVolumeSpecName: "kube-api-access-szntz") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "kube-api-access-szntz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.155284 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data" (OuterVolumeSpecName: "config-data") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.197668 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.213212 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35a6b709-4f80-4abc-a92f-24a43d09a805" (UID: "35a6b709-4f80-4abc-a92f-24a43d09a805"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223787 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223831 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szntz\" (UniqueName: \"kubernetes.io/projected/35a6b709-4f80-4abc-a92f-24a43d09a805-kube-api-access-szntz\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223842 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.223852 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a6b709-4f80-4abc-a92f-24a43d09a805-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600601 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a6b709-4f80-4abc-a92f-24a43d09a805","Type":"ContainerDied","Data":"e4082bbcd5482c7b8248419bd578fb69fd35b9f6097377273153ca13ce980a74"} Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600672 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.600692 4867 scope.go:117] "RemoveContainer" containerID="4f20ac204fec7521d0bfa644dbcfa122f64c1e1b5d03b1c1422d51607f747fbe" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.602833 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"09251416-b49f-4e81-9584-8428f1903785","Type":"ContainerDied","Data":"4ee4cff4cc87308f769e3bd724d5abd95ae658a9785bc66a6f75cd2304c98ea1"} Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.602941 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.630942 4867 scope.go:117] "RemoveContainer" containerID="fe2d375b29861eadad2b7db855fe51b64530824fb04ec1810859342237673233" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.666828 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.692452 4867 scope.go:117] "RemoveContainer" containerID="c9de120b6fd1a7517f333b812742eb01b3833d04ea075130057de9091383c946" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.698628 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.724687 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.743656 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744156 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744173 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744189 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744197 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744215 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744221 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: E0214 04:35:05.744233 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744241 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744483 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" containerName="nova-manage" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744530 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-log" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744539 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="09251416-b49f-4e81-9584-8428f1903785" containerName="nova-scheduler-scheduler" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.744550 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" containerName="nova-metadata-metadata" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.745441 4867 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.756110 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.756376 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.781976 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.784276 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.788791 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.789492 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.807348 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.837498 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.852911 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.853301 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.853455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.867710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868099 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868465 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.868681 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.976678 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977065 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977258 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977365 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.977457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.978582 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 
04:35:05.978868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.983006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3748198f-49fe-4a76-bd81-4ad518a594e8-logs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.989757 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.993123 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-config-data\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:05 crc kubenswrapper[4867]: I0214 04:35:05.993488 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3748198f-49fe-4a76-bd81-4ad518a594e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.000867 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.001657 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kj8t\" (UniqueName: \"kubernetes.io/projected/3748198f-49fe-4a76-bd81-4ad518a594e8-kube-api-access-8kj8t\") pod \"nova-metadata-0\" (UID: \"3748198f-49fe-4a76-bd81-4ad518a594e8\") " pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.016546 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-749v5\" (UniqueName: \"kubernetes.io/projected/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-kube-api-access-749v5\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.024485 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bb228b6-c3a9-46ac-8c21-a8786c6ac11b-config-data\") pod \"nova-scheduler-0\" (UID: \"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b\") " pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.091171 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.167941 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.626947 4867 generic.go:334] "Generic (PLEG): container finished" podID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerID="8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" exitCode=0 Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.627020 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4"} Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.795200 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:06 crc kubenswrapper[4867]: W0214 04:35:06.821900 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bb228b6_c3a9_46ac_8c21_a8786c6ac11b.slice/crio-afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636 WatchSource:0}: Error finding container afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636: Status 404 returned error can't find the container with id afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636 Feb 14 04:35:06 crc kubenswrapper[4867]: W0214 04:35:06.824748 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3748198f_49fe_4a76_bd81_4ad518a594e8.slice/crio-78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4 WatchSource:0}: Error finding container 78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4: Status 404 returned error can't find the container with id 78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4 Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.836551 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.878436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913139 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.913979 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.914120 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: 
\"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.914411 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.915883 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") pod \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\" (UID: \"850d3d1a-b2c1-4063-bfb3-a796d727ff88\") " Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.918440 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs" (OuterVolumeSpecName: "logs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:06 crc kubenswrapper[4867]: I0214 04:35:06.922578 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42" (OuterVolumeSpecName: "kube-api-access-4vt42") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "kube-api-access-4vt42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019577 4867 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/850d3d1a-b2c1-4063-bfb3-a796d727ff88-logs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019615 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vt42\" (UniqueName: \"kubernetes.io/projected/850d3d1a-b2c1-4063-bfb3-a796d727ff88-kube-api-access-4vt42\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.019759 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09251416-b49f-4e81-9584-8428f1903785" path="/var/lib/kubelet/pods/09251416-b49f-4e81-9584-8428f1903785/volumes" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.020383 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a6b709-4f80-4abc-a92f-24a43d09a805" path="/var/lib/kubelet/pods/35a6b709-4f80-4abc-a92f-24a43d09a805/volumes" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.034442 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data" (OuterVolumeSpecName: "config-data") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.048716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.051242 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.083366 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "850d3d1a-b2c1-4063-bfb3-a796d727ff88" (UID: "850d3d1a-b2c1-4063-bfb3-a796d727ff88"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122329 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122374 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122390 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.122401 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/850d3d1a-b2c1-4063-bfb3-a796d727ff88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.662422 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"850d3d1a-b2c1-4063-bfb3-a796d727ff88","Type":"ContainerDied","Data":"23eda3f5de37b914af1120c4a29676bc10a45dd14a87ddd0f0c35695c9bbb5a7"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.663059 4867 scope.go:117] "RemoveContainer" containerID="8d77482b563ed9482e4b0ebcbec7eb6c654115cb0d4aec7f4285cdc30ab1c7f4" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.663472 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.683972 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"7ce542212205c747ed57f127d518600ad3fff73ae9a54575e1dc9fbb5b42feb8"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.684029 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"020ee3e9d366c1b8fef2a939ab9172d0cb013d0129dc85d3831176ee65a1081f"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.684043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3748198f-49fe-4a76-bd81-4ad518a594e8","Type":"ContainerStarted","Data":"78f2e7fd40e9c58cc5f082541b0ca4e08987298f72a838a1396dc5ea37ecdbb4"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.690599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b","Type":"ContainerStarted","Data":"87a68aefd437700f9b6aa384418fc2aebbf7e5e7b1a2110cc403ad263a060445"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.690630 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7bb228b6-c3a9-46ac-8c21-a8786c6ac11b","Type":"ContainerStarted","Data":"afd951f8aa342236bef14675306fe5f7a7c6823cb9c92f7711be4adf24833636"} Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.726939 4867 scope.go:117] "RemoveContainer" containerID="49aade93d2eb64a508755defcd10d3374df2e6e0070641f14c9d09c777382e72" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.738046 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.738019469 podStartE2EDuration="2.738019469s" podCreationTimestamp="2026-02-14 04:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:07.713830999 +0000 UTC m=+1539.794768313" watchObservedRunningTime="2026-02-14 04:35:07.738019469 +0000 UTC m=+1539.818956783" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.756904 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.756880836 podStartE2EDuration="2.756880836s" podCreationTimestamp="2026-02-14 04:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:07.735719967 +0000 UTC m=+1539.816657281" watchObservedRunningTime="2026-02-14 04:35:07.756880836 +0000 UTC m=+1539.837818150" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.791214 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.827905 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.844964 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: E0214 04:35:07.845816 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 
04:35:07.845841 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: E0214 04:35:07.845895 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.845901 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.846142 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-api" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.846185 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" containerName="nova-api-log" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.854009 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858181 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858657 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.858873 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.859003 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955728 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955818 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.955897 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.956107 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
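The cpu_manager.go:410 / state_mem.go:107 / memory_manager.go:354 burst happens because nova-api-0 was deleted and re-added under a new UID: when the replacement pod is admitted, the resource managers notice checkpointed assignments still keyed to the old UID 850d3d1a-b2c1-4063-bfb3-a796d727ff88 and purge them (the E-level severity is cosmetic; this is routine cleanup, as the paired "Deleted CPUSet assignment" I-lines show). A sketch of that purge over an assumed in-memory assignment map — the real managers persist this state to a checkpoint file:

package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops resource assignments for pods the API server no
// longer knows about.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("Deleted CPUSet assignment podUID=%q container=%q\n", k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	a := map[key]string{
		{"850d3d1a-b2c1-4063-bfb3-a796d727ff88", "nova-api-api"}: "0-3",
	}
	removeStaleState(a, map[string]bool{}) // no pods active: everything is stale
}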
\"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.956196 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.989922 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:07 crc kubenswrapper[4867]: I0214 04:35:07.994371 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.030317 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.057978 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058092 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058164 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058195 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058274 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058395 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058434 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.058469 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.059632 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464bbcc9-1810-40bc-8773-bfa3e615b67b-logs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.065446 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.068257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-config-data\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.068802 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.082462 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/464bbcc9-1810-40bc-8773-bfa3e615b67b-public-tls-certs\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.083708 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd66b\" (UniqueName: \"kubernetes.io/projected/464bbcc9-1810-40bc-8773-bfa3e615b67b-kube-api-access-xd66b\") pod \"nova-api-0\" (UID: \"464bbcc9-1810-40bc-8773-bfa3e615b67b\") " pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161169 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161324 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: 
\"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161452 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161851 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.161985 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.180888 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.181231 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"redhat-marketplace-gvlgw\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.319347 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:08 crc kubenswrapper[4867]: I0214 04:35:08.824378 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.058290 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="850d3d1a-b2c1-4063-bfb3-a796d727ff88" path="/var/lib/kubelet/pods/850d3d1a-b2c1-4063-bfb3-a796d727ff88/volumes" Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.059095 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.731416 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"950fc6945de9051af9e1b0faf98cebbbdb2928cf426dd534741b7b23b9d2cf6c"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.732359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"1077d52991be2b0d0e83d78d63c066a64dfc4b3b1a4bad89f608cda44ff26c27"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.732450 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"464bbcc9-1810-40bc-8773-bfa3e615b67b","Type":"ContainerStarted","Data":"ddf25d5b2fc2c44a19e57f1102b554dfe6a76562b72824cb420dd7acc799fa3f"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735371 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" exitCode=0 Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735547 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.735650 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"70df0f314d5fb90d90314aa06788a811dc9c80acdc1aa6f7d2bb1ed596e5f7c2"} Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.738085 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.787971 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.787949438 podStartE2EDuration="2.787949438s" podCreationTimestamp="2026-02-14 04:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:35:09.769329628 +0000 UTC m=+1541.850266942" watchObservedRunningTime="2026-02-14 04:35:09.787949438 +0000 UTC m=+1541.868886752" Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.928153 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:09 crc kubenswrapper[4867]: I0214 04:35:09.980423 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:10 crc 
kubenswrapper[4867]: I0214 04:35:10.753945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.092580 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168818 4867 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podef0bc6d9-66ae-4a4d-8650-3c0ac27287cf"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podef0bc6d9-66ae-4a4d-8650-3c0ac27287cf] : Timed out while waiting for systemd to remove kubepods-besteffort-podef0bc6d9_66ae_4a4d_8650_3c0ac27287cf.slice" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168840 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.168921 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.768431 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" exitCode=0 Feb 14 04:35:11 crc kubenswrapper[4867]: I0214 04:35:11.768571 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.348778 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.350019 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8w8t2" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" containerID="cri-o://b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" gracePeriod=2 Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.889810 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerStarted","Data":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.894826 4867 generic.go:334] "Generic (PLEG): container finished" podID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerID="b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" exitCode=0 Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.895016 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341"} Feb 14 04:35:12 crc kubenswrapper[4867]: I0214 04:35:12.915353 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gvlgw" podStartSLOduration=3.473100387 podStartE2EDuration="5.915331957s" podCreationTimestamp="2026-02-14 04:35:07 +0000 UTC" 
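One stray line in this run, pod_container_manager_linux.go:210, shows pod cleanup blocking on systemd: the kubelet asked systemd to delete the pod's kubepods-besteffort-pod<uid>.slice and timed out waiting for it to disappear. Note the naming scheme, visible in the watch-event paths earlier in the log: the pod UID's dashes become underscores inside the slice name. A crude poll for that removal, assuming a cgroup v2 mount at /sys/fs/cgroup (the helper and its timeout are invented):

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// waitForSliceRemoval polls cgroupfs until a besteffort pod slice is gone.
func waitForSliceRemoval(podUID string, timeout time.Duration) error {
	slice := "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/" + slice
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); os.IsNotExist(err) {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for systemd to remove %s", slice)
}

func main() {
	fmt.Println(waitForSliceRemoval("ef0bc6d9-66ae-4a4d-8650-3c0ac27287cf", 2*time.Second))
}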
firstStartedPulling="2026-02-14 04:35:09.737895753 +0000 UTC m=+1541.818833067" lastFinishedPulling="2026-02-14 04:35:12.180127323 +0000 UTC m=+1544.261064637" observedRunningTime="2026-02-14 04:35:12.914816533 +0000 UTC m=+1544.995753857" watchObservedRunningTime="2026-02-14 04:35:12.915331957 +0000 UTC m=+1544.996269271" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.110280 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.208786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.208840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.209030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") pod \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\" (UID: \"07a0a67f-28d7-4aa6-872b-a0223c46a9ce\") " Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.209542 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities" (OuterVolumeSpecName: "utilities") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.215529 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947" (OuterVolumeSpecName: "kube-api-access-kz947") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "kube-api-access-kz947". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.312718 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.312763 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz947\" (UniqueName: \"kubernetes.io/projected/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-kube-api-access-kz947\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.340133 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07a0a67f-28d7-4aa6-872b-a0223c46a9ce" (UID: "07a0a67f-28d7-4aa6-872b-a0223c46a9ce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.414891 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07a0a67f-28d7-4aa6-872b-a0223c46a9ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918209 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8w8t2" event={"ID":"07a0a67f-28d7-4aa6-872b-a0223c46a9ce","Type":"ContainerDied","Data":"fdac00fce6c9717e1c8d18f0be51e81e7fbc0a9225c4838a2047a292e8ab0896"} Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918479 4867 scope.go:117] "RemoveContainer" containerID="b28951ec7a1a0d867c9e70873b61b9ce82ff78d0b694954ee6ad69ca9b10e341" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.918239 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8w8t2" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.947776 4867 scope.go:117] "RemoveContainer" containerID="7d63f285d67f04fff738be38ba2678cb46d4e846ee48b03b6257c8a564337d5d" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.968477 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.977703 4867 scope.go:117] "RemoveContainer" containerID="bcc64d905c4e5f9d636eab2cf199fd810c50163cc6446c91352e060a5a3e42fd" Feb 14 04:35:13 crc kubenswrapper[4867]: I0214 04:35:13.984052 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8w8t2"] Feb 14 04:35:15 crc kubenswrapper[4867]: I0214 04:35:15.023594 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" path="/var/lib/kubelet/pods/07a0a67f-28d7-4aa6-872b-a0223c46a9ce/volumes" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.092799 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.126651 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.168756 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.168810 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 14 04:35:16 crc kubenswrapper[4867]: I0214 04:35:16.993093 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 14 04:35:17 crc kubenswrapper[4867]: I0214 04:35:17.183732 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3748198f-49fe-4a76-bd81-4ad518a594e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:17 crc kubenswrapper[4867]: I0214 04:35:17.183752 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3748198f-49fe-4a76-bd81-4ad518a594e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.182294 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.182358 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.319825 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.320173 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:18 crc kubenswrapper[4867]: I0214 04:35:18.377901 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.031745 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.098888 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.194842 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="464bbcc9-1810-40bc-8773-bfa3e615b67b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.195030 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="464bbcc9-1810-40bc-8773-bfa3e615b67b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.7:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 04:35:19 crc kubenswrapper[4867]: I0214 04:35:19.764489 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 04:35:20 crc kubenswrapper[4867]: I0214 04:35:20.999187 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gvlgw" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" containerID="cri-o://f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" gracePeriod=2 Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.578579 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716134 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716247 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.716283 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") pod \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\" (UID: \"3dbe8df1-aae4-43fe-a7cc-bea6e0124213\") " Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.717193 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities" (OuterVolumeSpecName: "utilities") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.717615 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.722265 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l" (OuterVolumeSpecName: "kube-api-access-k9n8l") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "kube-api-access-k9n8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.742462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dbe8df1-aae4-43fe-a7cc-bea6e0124213" (UID: "3dbe8df1-aae4-43fe-a7cc-bea6e0124213"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.821391 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9n8l\" (UniqueName: \"kubernetes.io/projected/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-kube-api-access-k9n8l\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:21 crc kubenswrapper[4867]: I0214 04:35:21.821436 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dbe8df1-aae4-43fe-a7cc-bea6e0124213-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012465 4867 generic.go:334] "Generic (PLEG): container finished" podID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" exitCode=0 Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012548 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012581 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gvlgw" event={"ID":"3dbe8df1-aae4-43fe-a7cc-bea6e0124213","Type":"ContainerDied","Data":"70df0f314d5fb90d90314aa06788a811dc9c80acdc1aa6f7d2bb1ed596e5f7c2"} Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012600 4867 scope.go:117] "RemoveContainer" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.012769 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gvlgw" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.046892 4867 scope.go:117] "RemoveContainer" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.050388 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.064695 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gvlgw"] Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.089057 4867 scope.go:117] "RemoveContainer" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.156209 4867 scope.go:117] "RemoveContainer" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.157167 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": container with ID starting with f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6 not found: ID does not exist" containerID="f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157222 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6"} err="failed to get container status \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": rpc error: code = NotFound desc = could not find container \"f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6\": container with ID starting with f30d52308341a9296f8b6fd10d906d09999467b46c7027125fb93c9f82b211b6 not found: ID does not exist" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157260 4867 scope.go:117] "RemoveContainer" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.157827 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": container with ID starting with 7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4 not found: ID does not exist" containerID="7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157857 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4"} err="failed to get container status \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": rpc error: code = NotFound desc = could not find container \"7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4\": container with ID starting with 7f493b03493f584a948f58791a4731dee623aef265a565eb57782b6d03c752e4 not found: ID does not exist" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.157876 4867 scope.go:117] "RemoveContainer" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: E0214 04:35:22.158147 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": container with ID starting with 29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102 not found: ID does not exist" containerID="29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102" Feb 14 04:35:22 crc kubenswrapper[4867]: I0214 04:35:22.158178 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102"} err="failed to get container status \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": rpc error: code = NotFound desc = could not find container \"29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102\": container with ID starting with 29c3d46d3c1a5c9008610223d152565721e790493ef80583497b4a53c2abb102 not found: ID does not exist" Feb 14 04:35:23 crc kubenswrapper[4867]: I0214 04:35:23.027532 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" path="/var/lib/kubelet/pods/3dbe8df1-aae4-43fe-a7cc-bea6e0124213/volumes" Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.371016 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.372017 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" containerID="cri-o://c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" gracePeriod=30 Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.438890 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.439376 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" containerID="cri-o://46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" gracePeriod=30 Feb 14 04:35:24 crc kubenswrapper[4867]: I0214 04:35:24.959294 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.048480 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056088 4867 generic.go:334] "Generic (PLEG): container finished" podID="a78fec22-f395-42fc-a228-8d896580bc95" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" exitCode=2 Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056167 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerDied","Data":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a78fec22-f395-42fc-a228-8d896580bc95","Type":"ContainerDied","Data":"7872a307f41dac436f282982837819d0b6f5a19b6e81efabef32ab85041cfe4d"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056226 4867 scope.go:117] "RemoveContainer" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.056353 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062385 4867 generic.go:334] "Generic (PLEG): container finished" podID="4e89a71e-e837-4d98-a707-27908a8342bc" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" exitCode=2 Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062432 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerDied","Data":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062465 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"4e89a71e-e837-4d98-a707-27908a8342bc","Type":"ContainerDied","Data":"5b4f6da6858b80468a9ce475d2d3c8ccdc38ea567758289aef5a49879e4b28e8"} Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.062552 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.100588 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") pod \"a78fec22-f395-42fc-a228-8d896580bc95\" (UID: \"a78fec22-f395-42fc-a228-8d896580bc95\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.105362 4867 scope.go:117] "RemoveContainer" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.106007 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": container with ID starting with c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c not found: ID does not exist" containerID="c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.106051 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c"} err="failed to get container status \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": rpc error: code = NotFound desc = could not find container \"c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c\": container with ID starting with c6296689e104eeb9513087c1b6ad0a291438f63926c611686753788a4db4940c not found: ID does not exist" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.106075 4867 scope.go:117] "RemoveContainer" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.109716 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq" (OuterVolumeSpecName: "kube-api-access-h5zbq") pod "a78fec22-f395-42fc-a228-8d896580bc95" (UID: "a78fec22-f395-42fc-a228-8d896580bc95"). InnerVolumeSpecName "kube-api-access-h5zbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.144974 4867 scope.go:117] "RemoveContainer" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.145519 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": container with ID starting with 46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6 not found: ID does not exist" containerID="46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.145564 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6"} err="failed to get container status \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": rpc error: code = NotFound desc = could not find container \"46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6\": container with ID starting with 46871adad84ae3334a9c8c1d7590115ccc3e6c56c62e9c431fc9f978e9e97ba6 not found: ID does not exist" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.202875 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203185 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203299 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") pod \"4e89a71e-e837-4d98-a707-27908a8342bc\" (UID: \"4e89a71e-e837-4d98-a707-27908a8342bc\") " Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.203967 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5zbq\" (UniqueName: \"kubernetes.io/projected/a78fec22-f395-42fc-a228-8d896580bc95-kube-api-access-h5zbq\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.206350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj" (OuterVolumeSpecName: "kube-api-access-9zlkj") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "kube-api-access-9zlkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.239963 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.269536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data" (OuterVolumeSpecName: "config-data") pod "4e89a71e-e837-4d98-a707-27908a8342bc" (UID: "4e89a71e-e837-4d98-a707-27908a8342bc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306070 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306259 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e89a71e-e837-4d98-a707-27908a8342bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.306341 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zlkj\" (UniqueName: \"kubernetes.io/projected/4e89a71e-e837-4d98-a707-27908a8342bc-kube-api-access-9zlkj\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.401344 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.416809 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.432098 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.447604 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.466260 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467048 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467078 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467098 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467106 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467122 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467131 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467154 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467164 4867 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-content" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467175 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467185 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="extract-utilities" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467207 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467216 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467246 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467254 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: E0214 04:35:25.467271 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467278 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467646 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e89a71e-e837-4d98-a707-27908a8342bc" containerName="mysqld-exporter" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467675 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78fec22-f395-42fc-a228-8d896580bc95" containerName="kube-state-metrics" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467690 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbe8df1-aae4-43fe-a7cc-bea6e0124213" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.467702 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a0a67f-28d7-4aa6-872b-a0223c46a9ce" containerName="registry-server" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.468800 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.470988 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.471219 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.480721 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.483167 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.503074 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.503292 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.513738 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.531436 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615900 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615960 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.615984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.616020 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.616047 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc 
kubenswrapper[4867]: I0214 04:35:25.616086 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718675 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718690 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718767 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718797 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.718842 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723223 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.723789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.724307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.731277 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.732229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-config-data\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.740083 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5wm5\" (UniqueName: \"kubernetes.io/projected/e9139dc7-b868-4f7c-9e7e-10e313ff1e10-kube-api-access-t5wm5\") pod \"mysqld-exporter-0\" (UID: \"e9139dc7-b868-4f7c-9e7e-10e313ff1e10\") " pod="openstack/mysqld-exporter-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.741797 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zw62\" (UniqueName: \"kubernetes.io/projected/89e70483-d3e8-4758-bb61-ae6147dd4f39-kube-api-access-4zw62\") pod \"kube-state-metrics-0\" (UID: \"89e70483-d3e8-4758-bb61-ae6147dd4f39\") " pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.813930 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 14 04:35:25 crc kubenswrapper[4867]: I0214 04:35:25.886017 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.172420 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.173713 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.176391 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.401420 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.508386 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Feb 14 04:35:26 crc kubenswrapper[4867]: W0214 04:35:26.510491 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9139dc7_b868_4f7c_9e7e_10e313ff1e10.slice/crio-81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec WatchSource:0}: Error finding container 81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec: Status 404 returned error can't find the container with id 81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.675557 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677313 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent" containerID="cri-o://cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5" gracePeriod=30 Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677588 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent" containerID="cri-o://a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6" gracePeriod=30 Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677576 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core" containerID="cri-o://1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab" gracePeriod=30 Feb 14 04:35:26 crc kubenswrapper[4867]: I0214 04:35:26.677605 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd" containerID="cri-o://d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950" gracePeriod=30 Feb 14 04:35:26 crc kubenswrapper[4867]: E0214 04:35:26.764161 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e2abd9c_e70a_4c49_99e2_d8f2606d3916.slice/crio-conmon-1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.027413 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4e89a71e-e837-4d98-a707-27908a8342bc" path="/var/lib/kubelet/pods/4e89a71e-e837-4d98-a707-27908a8342bc/volumes" Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.029019 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78fec22-f395-42fc-a228-8d896580bc95" path="/var/lib/kubelet/pods/a78fec22-f395-42fc-a228-8d896580bc95/volumes" Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.130210 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"e9139dc7-b868-4f7c-9e7e-10e313ff1e10","Type":"ContainerStarted","Data":"81cbe0ca053c5f78199ef40639781845b0a9fe159c7091dbb851d99054a200ec"} Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.131673 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89e70483-d3e8-4758-bb61-ae6147dd4f39","Type":"ContainerStarted","Data":"d1fe91c8c6f53cf2cd3095d370426f8434db2b63771db762197d4b1633174d13"} Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136195 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950" exitCode=0 Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136230 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab" exitCode=2 Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950"} Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.136496 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab"} Feb 14 04:35:27 crc kubenswrapper[4867]: I0214 04:35:27.144789 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.152826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"89e70483-d3e8-4758-bb61-ae6147dd4f39","Type":"ContainerStarted","Data":"297abf93528c6931e93a622e3695fb5f753d0b19a6467b48c678927e93f9e34b"} Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.153574 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.155575 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5" exitCode=0 Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.155649 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5"} Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.157605 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"e9139dc7-b868-4f7c-9e7e-10e313ff1e10","Type":"ContainerStarted","Data":"90915f128655d36f5a05cb88e69e47360dadef16c0cfc8bedcf47ea687cdc58b"} Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.198321 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.748621739 podStartE2EDuration="3.198295581s" podCreationTimestamp="2026-02-14 04:35:25 +0000 UTC" firstStartedPulling="2026-02-14 04:35:26.400038275 +0000 UTC m=+1558.480975589" lastFinishedPulling="2026-02-14 04:35:26.849712117 +0000 UTC m=+1558.930649431" observedRunningTime="2026-02-14 04:35:28.17590447 +0000 UTC m=+1560.256841784" watchObservedRunningTime="2026-02-14 04:35:28.198295581 +0000 UTC m=+1560.279232905" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.207370 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.716197418 podStartE2EDuration="3.207352365s" podCreationTimestamp="2026-02-14 04:35:25 +0000 UTC" firstStartedPulling="2026-02-14 04:35:26.51561443 +0000 UTC m=+1558.596551744" lastFinishedPulling="2026-02-14 04:35:27.006769377 +0000 UTC m=+1559.087706691" observedRunningTime="2026-02-14 04:35:28.193616376 +0000 UTC m=+1560.274553690" watchObservedRunningTime="2026-02-14 04:35:28.207352365 +0000 UTC m=+1560.288289689" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.215997 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.230906 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.239104 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:35:28 crc kubenswrapper[4867]: I0214 04:35:28.263751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 14 04:35:29 crc kubenswrapper[4867]: I0214 04:35:29.169874 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 14 04:35:29 crc kubenswrapper[4867]: I0214 04:35:29.180937 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.212209 4867 generic.go:334] "Generic (PLEG): container finished" podID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerID="a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6" exitCode=0 Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.214415 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6"} Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.251746 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.251817 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.416891 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.574249 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.575882 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576324 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576409 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.576805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.577394 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") pod \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\" (UID: \"1e2abd9c-e70a-4c49-99e2-d8f2606d3916\") " Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.577849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578223 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578427 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.578535 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.582428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc" (OuterVolumeSpecName: "kube-api-access-td9tc") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "kube-api-access-td9tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.583349 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts" (OuterVolumeSpecName: "scripts") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.622494 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681366 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681639 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.681711 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td9tc\" (UniqueName: \"kubernetes.io/projected/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-kube-api-access-td9tc\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.686639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.710563 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data" (OuterVolumeSpecName: "config-data") pod "1e2abd9c-e70a-4c49-99e2-d8f2606d3916" (UID: "1e2abd9c-e70a-4c49-99e2-d8f2606d3916"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.784862 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:31 crc kubenswrapper[4867]: I0214 04:35:31.785288 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e2abd9c-e70a-4c49-99e2-d8f2606d3916-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.225949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1e2abd9c-e70a-4c49-99e2-d8f2606d3916","Type":"ContainerDied","Data":"36752e6e5f2c31ee736f7a9a28d860706f6c2685f55f602f485609bff4a72cd3"} Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.227248 4867 scope.go:117] "RemoveContainer" containerID="d3d7a5de7a46e9bf58582679cea6e78b22e33da4c8a17769dcc662cfd68cc950" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.227198 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.279422 4867 scope.go:117] "RemoveContainer" containerID="1fb8c5a5621f2d512d37075d0d5b21a45a195911425ead599feb944d6a4de9ab" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.285674 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.312259 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.330686 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331344 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331366 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331391 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331398 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core" Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331416 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331422 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: E0214 04:35:32.331438 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331444 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331713 4867 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-central-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331727 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="sg-core" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331745 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="ceilometer-notification-agent" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.331760 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" containerName="proxy-httpd" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.333991 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337146 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337418 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337554 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.337723 4867 scope.go:117] "RemoveContainer" containerID="a035303162febd05e4c69dbea4b23655bfc8fbf0f1bef5f71200bbb4908c72f6" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.343712 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.381433 4867 scope.go:117] "RemoveContainer" containerID="cc831c892e8c013abef53560483873aaf79b87e38bc3a6d0d64c21cf9f9314c5" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515661 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515851 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.515949 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516031 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516255 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.516411 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620207 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620270 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620630 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620739 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.620776 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.621550 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.622076 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.627891 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.628220 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.628407 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.630098 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.630775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.640469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"ceilometer-0\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " pod="openstack/ceilometer-0" Feb 14 04:35:32 crc kubenswrapper[4867]: I0214 04:35:32.682309 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.015777 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e2abd9c-e70a-4c49-99e2-d8f2606d3916" path="/var/lib/kubelet/pods/1e2abd9c-e70a-4c49-99e2-d8f2606d3916/volumes" Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.154434 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:35:33 crc kubenswrapper[4867]: I0214 04:35:33.242197 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"73ec0567b19c96951a830a41b4544085988752f18988cc5174bd34b76d04f7d9"} Feb 14 04:35:34 crc kubenswrapper[4867]: I0214 04:35:34.266736 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899"} Feb 14 04:35:35 crc kubenswrapper[4867]: I0214 04:35:35.288797 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88"} Feb 14 04:35:35 crc kubenswrapper[4867]: I0214 04:35:35.829630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 14 04:35:36 crc kubenswrapper[4867]: I0214 04:35:36.306500 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d"} Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.329070 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerStarted","Data":"0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0"} Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.329836 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:35:37 crc kubenswrapper[4867]: I0214 04:35:37.361387 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.745870458 podStartE2EDuration="5.361362372s" podCreationTimestamp="2026-02-14 04:35:32 +0000 UTC" firstStartedPulling="2026-02-14 04:35:33.152919926 +0000 UTC m=+1565.233857240" lastFinishedPulling="2026-02-14 04:35:36.76841184 +0000 UTC m=+1568.849349154" observedRunningTime="2026-02-14 04:35:37.359018609 +0000 UTC m=+1569.439955943" watchObservedRunningTime="2026-02-14 04:35:37.361362372 +0000 UTC m=+1569.442299706" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.250644 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.251348 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.251396 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.252841 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.252906 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" gracePeriod=600 Feb 14 04:36:01 crc kubenswrapper[4867]: E0214 04:36:01.374331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631656 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" exitCode=0 Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631699 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"} Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.631736 4867 scope.go:117] "RemoveContainer" containerID="9c4b967cf6b24751f9f07fc3f33e355390aef9adbb8efd8f22637fd0bfe6c0be" Feb 14 04:36:01 crc kubenswrapper[4867]: I0214 04:36:01.632795 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:01 crc kubenswrapper[4867]: E0214 04:36:01.633331 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:02 crc kubenswrapper[4867]: I0214 04:36:02.692364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.009602 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.014688 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.022056 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.180427 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282246 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282373 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.282463 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.283042 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.283576 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.303401 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"certified-operators-kwldn\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") " pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.357445 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.862241 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:10 crc kubenswrapper[4867]: I0214 04:36:10.964334 4867 scope.go:117] "RemoveContainer" containerID="026325c8f6cfe452fbbf5a283d6335d1b62be9618bc89fae94bbe5dcc2c9e96d" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.005224 4867 scope.go:117] "RemoveContainer" containerID="f68abce2a11886ea053ab13b7ebbe72ba1f8d7abcfad4ba7b26252a8c0000f25" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.039650 4867 scope.go:117] "RemoveContainer" containerID="7429acc7d9da73b9750d17def9d8240155c7d41dbd196ce0d4607a1d9b14419f" Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751413 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" exitCode=0 Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751638 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781"} Feb 14 04:36:11 crc kubenswrapper[4867]: I0214 04:36:11.751754 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"f81d9f3f5e58496407123bbe89b13b2f4384e5424f5ed4516e82d1a0c14bf576"} Feb 14 04:36:12 crc kubenswrapper[4867]: I0214 04:36:12.768558 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"} Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.780267 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.795466 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-246z7"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.839877 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.842176 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.855522 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-l8hr2"] Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992046 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992161 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.992336 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:13 crc kubenswrapper[4867]: I0214 04:36:13.996972 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:13 crc kubenswrapper[4867]: E0214 04:36:13.997431 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.095532 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.095876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.096946 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.104162 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2" Feb 14 04:36:14 crc 
Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.115469 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"heat-db-sync-l8hr2\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") " pod="openstack/heat-db-sync-l8hr2"
Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.182843 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-l8hr2"
Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.714437 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-l8hr2"]
Feb 14 04:36:14 crc kubenswrapper[4867]: W0214 04:36:14.714924 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632c48c8_f0d5_4dc9_823e_fa96b9265e97.slice/crio-f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab WatchSource:0}: Error finding container f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab: Status 404 returned error can't find the container with id f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab
Feb 14 04:36:14 crc kubenswrapper[4867]: I0214 04:36:14.797661 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerStarted","Data":"f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab"}
Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.014462 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fb2b12-f922-4976-8e05-6e78a8751456" path="/var/lib/kubelet/pods/18fb2b12-f922-4976-8e05-6e78a8751456/volumes"
Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.809785 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.821680 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" exitCode=0
Feb 14 04:36:15 crc kubenswrapper[4867]: I0214 04:36:15.821725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"}
Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.093623 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.093931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" containerID="cri-o://da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" gracePeriod=30
Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094074 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" containerID="cri-o://dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" gracePeriod=30
podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" containerID="cri-o://dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094160 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" containerID="cri-o://925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.094242 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" containerID="cri-o://0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" gracePeriod=30 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.845858 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerStarted","Data":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850708 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" exitCode=0 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850763 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" exitCode=2 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850774 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" exitCode=0 Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850798 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850830 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.850841 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899"} Feb 14 04:36:16 crc kubenswrapper[4867]: I0214 04:36:16.877836 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwldn" podStartSLOduration=3.326191045 podStartE2EDuration="7.877814892s" podCreationTimestamp="2026-02-14 04:36:09 +0000 UTC" firstStartedPulling="2026-02-14 04:36:11.755282067 +0000 UTC m=+1603.836219391" lastFinishedPulling="2026-02-14 04:36:16.306905924 +0000 UTC m=+1608.387843238" observedRunningTime="2026-02-14 04:36:16.872043111 +0000 UTC m=+1608.952980425" watchObservedRunningTime="2026-02-14 04:36:16.877814892 +0000 UTC m=+1608.958752206" Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.098849 
Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.869446 4867 generic.go:334] "Generic (PLEG): container finished" podID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerID="925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" exitCode=0
Feb 14 04:36:17 crc kubenswrapper[4867]: I0214 04:36:17.869816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88"}
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.490122 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643699 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643852 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643935 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.643999 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644058 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644164 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") pod \"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") "
\"755b32e7-a73b-4823-a57a-9ff2346f37ba\" (UID: \"755b32e7-a73b-4823-a57a-9ff2346f37ba\") " Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644307 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644763 4867 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.644955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.650139 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts" (OuterVolumeSpecName: "scripts") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.650797 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx" (OuterVolumeSpecName: "kube-api-access-xq4lx") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "kube-api-access-xq4lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.687958 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.745532 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747146 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747180 4867 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/755b32e7-a73b-4823-a57a-9ff2346f37ba-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747189 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq4lx\" (UniqueName: \"kubernetes.io/projected/755b32e7-a73b-4823-a57a-9ff2346f37ba-kube-api-access-xq4lx\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747199 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.747208 4867 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.784648 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.849602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data" (OuterVolumeSpecName: "config-data") pod "755b32e7-a73b-4823-a57a-9ff2346f37ba" (UID: "755b32e7-a73b-4823-a57a-9ff2346f37ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.850067 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.850104 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/755b32e7-a73b-4823-a57a-9ff2346f37ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913686 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"755b32e7-a73b-4823-a57a-9ff2346f37ba","Type":"ContainerDied","Data":"73ec0567b19c96951a830a41b4544085988752f18988cc5174bd34b76d04f7d9"} Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913756 4867 scope.go:117] "RemoveContainer" containerID="0447a5810775684932ac15e3424c1b15be46ff0f806cbba24fd777ce41cbccc0" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.913940 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:18 crc kubenswrapper[4867]: I0214 04:36:18.990453 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.007413 4867 scope.go:117] "RemoveContainer" containerID="dc1cf3121882c456defd0b584e2d3e7cab7b3b69157d5a2371159fa03ae59f2d" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.042755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.054572 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055322 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055347 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055367 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055378 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055409 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055417 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: E0214 04:36:19.055451 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055463 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055778 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-central-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055816 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="proxy-httpd" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055832 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="ceilometer-notification-agent" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.055852 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" containerName="sg-core" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.058880 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.065552 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.065607 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.066048 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.070221 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.079375 4867 scope.go:117] "RemoveContainer" containerID="925a585863d08622a1aaa17cd592d436946e2f7543ad7a339de42ffb5db6ed88" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.129681 4867 scope.go:117] "RemoveContainer" containerID="da180bbe3f204dbafda3ff9411b5f7ce6de88f48145b022bced6575ef8415899" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.164733 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.164948 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165226 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165545 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.165656 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl7qf\" (UniqueName: 
\"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.166013 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268784 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.268849 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl7qf\" (UniqueName: \"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269005 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269132 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.269182 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.270983 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-log-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.271069 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27437fd9-2bc5-48ac-9e34-e733da15dd2b-run-httpd\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.275025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.275056 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.278456 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-config-data\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.287395 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bl7qf\" (UniqueName: \"kubernetes.io/projected/27437fd9-2bc5-48ac-9e34-e733da15dd2b-kube-api-access-bl7qf\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.290758 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.305306 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27437fd9-2bc5-48ac-9e34-e733da15dd2b-scripts\") pod \"ceilometer-0\" (UID: \"27437fd9-2bc5-48ac-9e34-e733da15dd2b\") " pod="openstack/ceilometer-0" Feb 14 04:36:19 crc kubenswrapper[4867]: I0214 04:36:19.378637 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.214035 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.357955 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.359564 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:20 crc kubenswrapper[4867]: I0214 04:36:20.988114 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"d4f0bdf0dbd1d228ba52e053dd1cc643ebc3046d0b265d62590eb29358a8f187"} Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.027616 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="755b32e7-a73b-4823-a57a-9ff2346f37ba" path="/var/lib/kubelet/pods/755b32e7-a73b-4823-a57a-9ff2346f37ba/volumes" Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.418099 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:36:21 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:36:21 crc kubenswrapper[4867]: > Feb 14 04:36:21 crc kubenswrapper[4867]: I0214 04:36:21.573248 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" containerID="cri-o://3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" gracePeriod=604795 Feb 14 04:36:22 crc kubenswrapper[4867]: I0214 04:36:22.010539 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" containerID="cri-o://1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7" gracePeriod=604796 Feb 14 04:36:27 crc kubenswrapper[4867]: I0214 04:36:27.210325 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.126:5671: connect: connection refused" Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.056391 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.111085 4867 generic.go:334] "Generic (PLEG): container finished" podID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerID="3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" exitCode=0 Feb 14 04:36:28 crc kubenswrapper[4867]: I0214 04:36:28.111583 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989"} Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.017241 4867 scope.go:117] "RemoveContainer" 
containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:29 crc kubenswrapper[4867]: E0214 04:36:29.017961 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.128022 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerID="1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7" exitCode=0 Feb 14 04:36:29 crc kubenswrapper[4867]: I0214 04:36:29.128115 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7"} Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.039605 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.042844 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.059029 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.085613 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102823 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.102953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.205868 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206456 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206546 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206576 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206798 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.206844 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207279 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207609 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.207608 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.208394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.208890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.209286 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.209864 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.232586 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"dnsmasq-dns-7d84b4d45c-prh4d\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.387641 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:31 crc kubenswrapper[4867]: I0214 04:36:31.425098 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:36:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:36:31 crc kubenswrapper[4867]: > Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.678498 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.698439 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794271 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794495 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794563 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794590 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.794703 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.795281 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796783 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796826 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796854 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796881 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796913 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796955 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.796986 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797023 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797080 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797120 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797171 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797225 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") pod \"e1e022d9-e2db-41eb-bbc8-36a85211a141\" (UID: \"e1e022d9-e2db-41eb-bbc8-36a85211a141\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.797805 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") pod \"9bba5174-edd6-4e59-8b84-6c50439be88e\" (UID: \"9bba5174-edd6-4e59-8b84-6c50439be88e\") " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.806153 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.808422 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.819415 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.829386 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.840117 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.845096 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.852261 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j" (OuterVolumeSpecName: "kube-api-access-wrf6j") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "kube-api-access-wrf6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.857179 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.857235 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.875833 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.885034 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.887872 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.888051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.888666 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info" (OuterVolumeSpecName: "pod-info") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.867923 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p" (OuterVolumeSpecName: "kube-api-access-q676p") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "kube-api-access-q676p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.912622 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info" (OuterVolumeSpecName: "pod-info") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.939874 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f" (OuterVolumeSpecName: "persistence") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "pvc-d997565a-60ec-4873-b7c9-bde8044c981f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.953605 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce" (OuterVolumeSpecName: "persistence") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972281 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9bba5174-edd6-4e59-8b84-6c50439be88e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972742 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrf6j\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-kube-api-access-wrf6j\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972756 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972766 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972803 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") on node \"crc\" " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972823 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") on node \"crc\" " Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972838 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e1e022d9-e2db-41eb-bbc8-36a85211a141-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972852 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9bba5174-edd6-4e59-8b84-6c50439be88e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972865 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972878 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e1e022d9-e2db-41eb-bbc8-36a85211a141-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972889 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q676p\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-kube-api-access-q676p\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972901 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972912 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.972922 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.973916 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf" (OuterVolumeSpecName: "server-conf") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.983849 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data" (OuterVolumeSpecName: "config-data") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:33 crc kubenswrapper[4867]: I0214 04:36:33.987358 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data" (OuterVolumeSpecName: "config-data") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.004573 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf" (OuterVolumeSpecName: "server-conf") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.043888 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.044313 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d997565a-60ec-4873-b7c9-bde8044c981f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f") on node "crc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.047483 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.048436 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce") on node "crc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.076487 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077270 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077357 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077433 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077498 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9bba5174-edd6-4e59-8b84-6c50439be88e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.077610 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e1e022d9-e2db-41eb-bbc8-36a85211a141-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.090705 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e1e022d9-e2db-41eb-bbc8-36a85211a141" (UID: "e1e022d9-e2db-41eb-bbc8-36a85211a141"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.095760 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9bba5174-edd6-4e59-8b84-6c50439be88e" (UID: "9bba5174-edd6-4e59-8b84-6c50439be88e"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.181975 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e1e022d9-e2db-41eb-bbc8-36a85211a141-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.182012 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9bba5174-edd6-4e59-8b84-6c50439be88e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194245 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9bba5174-edd6-4e59-8b84-6c50439be88e","Type":"ContainerDied","Data":"1a22c1b816602c7a9c207095a5f963d6cce2df715e59142c62ec1b7539b424fc"} Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194332 4867 scope.go:117] "RemoveContainer" containerID="3a805b4a9b14096595ccbe2f2670f7820f5c356d6f6f2f30fc1ba861c96ba989" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.194745 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.196883 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e1e022d9-e2db-41eb-bbc8-36a85211a141","Type":"ContainerDied","Data":"eff48d6ea9b314940f4e42275756ed44177eec1f24e83d25c5b5fe5435a8ea2e"} Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.198000 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.260087 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.277298 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.298706 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.317157 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.335566 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336339 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336363 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336398 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336405 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="setup-container" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc 
kubenswrapper[4867]: I0214 04:36:34.336424 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: E0214 04:36:34.336445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336451 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336724 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.336746 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" containerName="rabbitmq" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.338541 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.341016 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.341375 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.347687 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.348308 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.351151 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.351602 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.352126 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-7gx8s" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.365136 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.368035 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.402148 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.415832 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489410 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489459 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489524 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489567 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489590 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: 
I0214 04:36:34.489606 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489622 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489667 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489697 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489749 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489779 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489821 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc 
kubenswrapper[4867]: I0214 04:36:34.489835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489876 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489902 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.489950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592482 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592524 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.592617 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc 
kubenswrapper[4867]: I0214 04:36:34.593024 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593085 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593122 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593168 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593203 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593235 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593336 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593412 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.593440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-config-data\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-config-data\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594460 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.594807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-server-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596373 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596550 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596635 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596755 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596782 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.596948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597162 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.597762 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c8afa7ab-eaaa-4558-99d5-c655cf271f62-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.598424 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.601263 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.601363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.602155 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/c8afa7ab-eaaa-4558-99d5-c655cf271f62-pod-info\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.602829 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.603086 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.606362 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.610786 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.611747 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.612072 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c8afa7ab-eaaa-4558-99d5-c655cf271f62-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613548 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613586 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03d7bcff7c5d0322515cfcd29e48bfb1d0d6f9021316ba38c2028cf5ce82afee/globalmount\"" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613615 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.613646 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c81ba883a06ca9e019b2d7c726ddbfb519b81827f5cfcee1e25c00752814b8f/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.617418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzgv6\" (UniqueName: \"kubernetes.io/projected/c8afa7ab-eaaa-4558-99d5-c655cf271f62-kube-api-access-lzgv6\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.617898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.631133 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nntts\" (UniqueName: \"kubernetes.io/projected/0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c-kube-api-access-nntts\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.698314 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d997565a-60ec-4873-b7c9-bde8044c981f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d997565a-60ec-4873-b7c9-bde8044c981f\") pod \"rabbitmq-server-2\" (UID: \"c8afa7ab-eaaa-4558-99d5-c655cf271f62\") " pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.701280 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8137c787-0a8b-490f-9eaf-e3821659a9ce\") pod \"rabbitmq-cell1-server-0\" (UID: \"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c\") " pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.707265 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Feb 14 04:36:34 crc kubenswrapper[4867]: I0214 04:36:34.975653 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:36:35 crc kubenswrapper[4867]: I0214 04:36:35.013300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bba5174-edd6-4e59-8b84-6c50439be88e" path="/var/lib/kubelet/pods/9bba5174-edd6-4e59-8b84-6c50439be88e/volumes" Feb 14 04:36:35 crc kubenswrapper[4867]: I0214 04:36:35.015139 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e022d9-e2db-41eb-bbc8-36a85211a141" path="/var/lib/kubelet/pods/e1e022d9-e2db-41eb-bbc8-36a85211a141/volumes" Feb 14 04:36:40 crc kubenswrapper[4867]: I0214 04:36:40.409090 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:40 crc kubenswrapper[4867]: I0214 04:36:40.480125 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759714 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759782 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759920 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644h55dh68fh5b9h59ch5ch676h577h677hc6h557h5cdhdh54dh5b7h5f8h59h549hc8h584h5cchf6hb8h66ch95h6dh544h594h54h58ch8dh648q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
Feb 14 04:36:40 crc kubenswrapper[4867]: E0214 04:36:40.759920 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n644h55dh68fh5b9h59ch5ch676h577h677hc6h557h5cdhdh54dh5b7h5f8h59h549hc8h584h5cchf6hb8h66ch95h6dh544h594h54h58ch8dh648q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bl7qf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(27437fd9-2bc5-48ac-9e34-e733da15dd2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.102002 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.102928 4867 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.103106 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j82w7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-l8hr2_openstack(632c48c8-f0d5-4dc9-823e-fa96b9265e97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.104636 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-l8hr2" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97"
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.129367 4867 scope.go:117] "RemoveContainer" containerID="cdd34e48fd8308f6fcb0879223cfb287fe4fad8d2d81caedd7f537716f873d08"
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.220248 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"]
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.290030 4867 scope.go:117] "RemoveContainer" containerID="1c9536ee76daa0952682b4376762a2a587b803ad41d92cac29e3c1b5557102c7"
Feb 14 04:36:41 crc kubenswrapper[4867]: E0214 04:36:41.304720 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-l8hr2" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97"
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.350754 4867 scope.go:117] "RemoveContainer" containerID="262c6cf6afafb6e46f694f14f681aa82c37388eec461cacbdee05ba39ec4b230"
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.699175 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.709978 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8afa7ab_eaaa_4558_99d5_c655cf271f62.slice/crio-6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e WatchSource:0}: Error finding container 6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e: Status 404 returned error can't find the container with id 6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e
Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.714655 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0901cb1a_f3c5_4eff_843b_cdb5c5c7a78c.slice/crio-70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995 WatchSource:0}: Error finding container 70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995: Status 404 returned error can't find the container with id 70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.717005 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 14 04:36:41 crc kubenswrapper[4867]: W0214 04:36:41.720790 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod323a0af9_9e80_476b_8315_e20a6dd41293.slice/crio-09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a WatchSource:0}: Error finding container 09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a: Status 404 returned error can't find the container with id 09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a
Feb 14 04:36:41 crc kubenswrapper[4867]: I0214 04:36:41.729933 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"]
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.321316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"1557719afe78de0e4e29cad64cf88aca042467a40974993c334f00a52cde8934"}
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.322913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"6b354dcbefff7fa7caee2aaba4c3bdf408e699b4754ffae69c10621f6a2fbf6e"}
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.323931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"70711207fa42dafdb29e5bf118bc6444bf48cc782f58da14bc872c1dee6f4995"}
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.325327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22"}
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.325354 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a"}
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.328931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwldn" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" containerID="cri-o://2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" gracePeriod=2
Feb 14 04:36:42 crc kubenswrapper[4867]: I0214 04:36:42.902971 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn"
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.038383 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") "
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.038741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") "
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.039081 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") pod \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\" (UID: \"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0\") "
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.040184 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities" (OuterVolumeSpecName: "utilities") pod "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" (UID: "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.040340 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.096098 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" (UID: "77086ddb-f1c4-4387-a1b3-a7b9389d4eb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.142407 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-catalog-content\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.245791 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snc9h\" (UniqueName: \"kubernetes.io/projected/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0-kube-api-access-snc9h\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.344969 4867 generic.go:334] "Generic (PLEG): container finished" podID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" exitCode=0 Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345122 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345152 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwldn" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345229 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwldn" event={"ID":"77086ddb-f1c4-4387-a1b3-a7b9389d4eb0","Type":"ContainerDied","Data":"f81d9f3f5e58496407123bbe89b13b2f4384e5424f5ed4516e82d1a0c14bf576"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.345274 4867 scope.go:117] "RemoveContainer" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.347604 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"793cecbd56bcbcfca8f8a59fa74a8549ee89fe0d7b86bb7ed8129fcfe01fcb5d"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.352028 4867 generic.go:334] "Generic (PLEG): container finished" podID="323a0af9-9e80-476b-8315-e20a6dd41293" containerID="77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22" exitCode=0 Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.352096 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22"} Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.917426 4867 scope.go:117] "RemoveContainer" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.935659 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.946800 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwldn"] Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.956653 4867 scope.go:117] "RemoveContainer" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:43 crc kubenswrapper[4867]: I0214 04:36:43.998392 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:36:43 crc kubenswrapper[4867]: E0214 04:36:43.998957 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038048 4867 scope.go:117] "RemoveContainer" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.038655 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": container with ID starting with 2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5 not found: ID does not exist" containerID="2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038707 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5"} err="failed to get container status \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": rpc error: code = NotFound desc = could not find container \"2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5\": container with ID starting with 2e95723f53dda67e6fbff64267cd010c5186a50cb85b25868b75a1965fb93aa5 not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.038741 4867 scope.go:117] "RemoveContainer" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.039668 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": container with ID starting with 4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea not found: ID does not exist" containerID="4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.039748 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea"} err="failed to get container status \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": rpc error: code = NotFound desc = could not find container \"4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea\": container with ID starting with 4f2b9b821ddbdb0d02349dca656687220bddd2bf4415503ac614d53801212cea not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.039798 4867 scope.go:117] "RemoveContainer" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.040261 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": container with ID starting with 5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781 not found: ID does not exist" containerID="5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.040306 4867 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781"} err="failed to get container status \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": rpc error: code = NotFound desc = could not find container \"5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781\": container with ID starting with 5253dc65bb4e2a66c82df57f3fab0290cc8ebb76baf27354b2a9c4455891c781 not found: ID does not exist" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.340883 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.368861 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.371313 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerStarted","Data":"807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.371449 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.375652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"07c1546e9d32c390db109f6ed008be97ed287780d0e353ea325161c7f8bf4380"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.375806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 14 04:36:44 crc kubenswrapper[4867]: E0214 04:36:44.377229 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.377718 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c"} Feb 14 04:36:44 crc kubenswrapper[4867]: I0214 04:36:44.454599 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" podStartSLOduration=14.454570282 podStartE2EDuration="14.454570282s" podCreationTimestamp="2026-02-14 04:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:36:44.449845938 +0000 UTC m=+1636.530783252" watchObservedRunningTime="2026-02-14 04:36:44.454570282 +0000 UTC m=+1636.535507596" Feb 14 04:36:45 crc kubenswrapper[4867]: I0214 04:36:45.013791 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" 
path="/var/lib/kubelet/pods/77086ddb-f1c4-4387-a1b3-a7b9389d4eb0/volumes" Feb 14 04:36:45 crc kubenswrapper[4867]: E0214 04:36:45.394682 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.389871 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.499289 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"] Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.499962 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" containerID="cri-o://9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" gracePeriod=10 Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.623817 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 04:36:51.627400 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-utilities" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627441 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-utilities" Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 04:36:51.627460 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627468 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: E0214 04:36:51.627493 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-content" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627502 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="extract-content" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.627831 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="77086ddb-f1c4-4387-a1b3-a7b9389d4eb0" containerName="registry-server" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.629826 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.655473 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808637 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808734 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwtp\" (UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808846 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808893 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808935 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.808982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.809081 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.910889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxwtp\" (UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 
04:36:51.911277 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911321 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911349 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911457 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.911544 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.912948 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913266 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913336 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913496 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.913644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-config\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:51 crc kubenswrapper[4867]: I0214 04:36:51.939179 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxwtp\" (UniqueName: \"kubernetes.io/projected/2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6-kube-api-access-lxwtp\") pod \"dnsmasq-dns-6f6df4f56c-tnn8p\" (UID: \"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6\") " pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.008798 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.172114 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.346942 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347197 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347241 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: 
\"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.347267 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.354633 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt" (OuterVolumeSpecName: "kube-api-access-wnqpt") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "kube-api-access-wnqpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.431851 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config" (OuterVolumeSpecName: "config") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.460556 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.462113 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.464632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") pod \"5971b677-9b43-4667-b205-3926975d03d8\" (UID: \"5971b677-9b43-4667-b205-3926975d03d8\") " Feb 14 04:36:52 crc kubenswrapper[4867]: W0214 04:36:52.464750 4867 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5971b677-9b43-4667-b205-3926975d03d8/volumes/kubernetes.io~configmap/ovsdbserver-sb Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.464768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465732 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465752 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnqpt\" (UniqueName: \"kubernetes.io/projected/5971b677-9b43-4667-b205-3926975d03d8-kube-api-access-wnqpt\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465763 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.465771 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.469769 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488150 4867 generic.go:334] "Generic (PLEG): container finished" podID="5971b677-9b43-4667-b205-3926975d03d8" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" exitCode=0 Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488224 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" event={"ID":"5971b677-9b43-4667-b205-3926975d03d8","Type":"ContainerDied","Data":"3c342daaec09db1c73482280fce80173920eec884b7d07687fab104355216038"} Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488244 4867 scope.go:117] "RemoveContainer" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.488409 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.493109 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5971b677-9b43-4667-b205-3926975d03d8" (UID: "5971b677-9b43-4667-b205-3926975d03d8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.528456 4867 scope.go:117] "RemoveContainer" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.564561 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-tnn8p"] Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.565450 4867 scope.go:117] "RemoveContainer" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: E0214 04:36:52.566559 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": container with ID starting with 9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22 not found: ID does not exist" containerID="9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.566609 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22"} err="failed to get container status \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": rpc error: code = NotFound desc = could not find container \"9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22\": container with ID starting with 9ad581f44e041fa43febb489e9c262f745817b519d4f39097aec3abed254cc22 not found: ID does not exist" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.566656 4867 scope.go:117] "RemoveContainer" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: E0214 04:36:52.567017 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": container with ID starting with 6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f not found: ID does not exist" containerID="6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567053 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f"} err="failed to get container status \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": rpc error: code = NotFound desc = could not find container \"6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f\": container with ID starting with 6971374dbc010707ba6790cccdbab9a07aa3260bf64fef9946cb0b85383f3d5f not found: ID does not exist" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567719 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.567757 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5971b677-9b43-4667-b205-3926975d03d8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:36:52 crc kubenswrapper[4867]: W0214 04:36:52.572718 4867 manager.go:1169] Failed to process watch event {EventType:0 
Feb 14 04:36:52 crc kubenswrapper[4867]: W0214 04:36:52.572718 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff227b0_1fbd_4d96_9201_8ef0fb5a68a6.slice/crio-67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368 WatchSource:0}: Error finding container 67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368: Status 404 returned error can't find the container with id 67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368
Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.862588 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"]
Feb 14 04:36:52 crc kubenswrapper[4867]: I0214 04:36:52.882631 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-5cgsc"]
Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.018172 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5971b677-9b43-4667-b205-3926975d03d8" path="/var/lib/kubelet/pods/5971b677-9b43-4667-b205-3926975d03d8/volumes"
Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513909 4867 generic.go:334] "Generic (PLEG): container finished" podID="2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6" containerID="91bff455b8941f7c61a76deb1385e22d412455f0a3376814dc857712292e4023" exitCode=0
Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerDied","Data":"91bff455b8941f7c61a76deb1385e22d412455f0a3376814dc857712292e4023"}
Feb 14 04:36:53 crc kubenswrapper[4867]: I0214 04:36:53.513998 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerStarted","Data":"67971b1a8f5283abd5efe6a9728aa4c2c312c64324ccdc8ba5c46519a4943368"}
Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.528831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" event={"ID":"2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6","Type":"ContainerStarted","Data":"83842c5661d318a59ff36083bda28f2f64ebeb6e3b1dc9f95877497a7d664886"}
Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.529085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p"
Feb 14 04:36:54 crc kubenswrapper[4867]: I0214 04:36:54.559368 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" podStartSLOduration=3.559340609 podStartE2EDuration="3.559340609s" podCreationTimestamp="2026-02-14 04:36:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:36:54.549225714 +0000 UTC m=+1646.630163068" watchObservedRunningTime="2026-02-14 04:36:54.559340609 +0000 UTC m=+1646.640277923"
Feb 14 04:36:55 crc kubenswrapper[4867]: I0214 04:36:55.543783 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerStarted","Data":"de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2"}
Feb 14 04:36:55 crc kubenswrapper[4867]: I0214 04:36:55.566010 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-l8hr2" podStartSLOduration=2.098260968 podStartE2EDuration="42.565989488s" podCreationTimestamp="2026-02-14 04:36:13 +0000 UTC" firstStartedPulling="2026-02-14 04:36:14.718802448 +0000 UTC m=+1606.799739762" lastFinishedPulling="2026-02-14 04:36:55.186530968 +0000 UTC m=+1647.267468282" observedRunningTime="2026-02-14 04:36:55.562307891 +0000 UTC m=+1647.643245205" watchObservedRunningTime="2026-02-14 04:36:55.565989488 +0000 UTC m=+1647.646926822"
Feb 14 04:36:58 crc kubenswrapper[4867]: I0214 04:36:58.580267 4867 generic.go:334] "Generic (PLEG): container finished" podID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerID="de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2" exitCode=0
Feb 14 04:36:58 crc kubenswrapper[4867]: I0214 04:36:58.580382 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerDied","Data":"de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2"}
Feb 14 04:36:59 crc kubenswrapper[4867]: I0214 04:36:59.014656 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e"
Feb 14 04:36:59 crc kubenswrapper[4867]: E0214 04:36:59.014910 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.023516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.134898 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-l8hr2"
Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186293 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") pod \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") "
Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186527 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") pod \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") "
Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.186567 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") pod \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\" (UID: \"632c48c8-f0d5-4dc9-823e-fa96b9265e97\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.223136 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "632c48c8-f0d5-4dc9-823e-fa96b9265e97" (UID: "632c48c8-f0d5-4dc9-823e-fa96b9265e97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.291914 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j82w7\" (UniqueName: \"kubernetes.io/projected/632c48c8-f0d5-4dc9-823e-fa96b9265e97-kube-api-access-j82w7\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.291988 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.297877 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data" (OuterVolumeSpecName: "config-data") pod "632c48c8-f0d5-4dc9-823e-fa96b9265e97" (UID: "632c48c8-f0d5-4dc9-823e-fa96b9265e97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.394259 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632c48c8-f0d5-4dc9-823e-fa96b9265e97-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.612964 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614818 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-l8hr2" event={"ID":"632c48c8-f0d5-4dc9-823e-fa96b9265e97","Type":"ContainerDied","Data":"f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab"} Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614879 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d7447bc4808aa0ae450dfc090bd3e6cef5e2bf5c0d0482fa7c73bb4eea0eab" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.614909 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-l8hr2" Feb 14 04:37:00 crc kubenswrapper[4867]: I0214 04:37:00.642293 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.523163866 podStartE2EDuration="42.642270758s" podCreationTimestamp="2026-02-14 04:36:18 +0000 UTC" firstStartedPulling="2026-02-14 04:36:20.244783202 +0000 UTC m=+1612.325720516" lastFinishedPulling="2026-02-14 04:37:00.363890094 +0000 UTC m=+1652.444827408" observedRunningTime="2026-02-14 04:37:00.639596458 +0000 UTC m=+1652.720533772" watchObservedRunningTime="2026-02-14 04:37:00.642270758 +0000 UTC m=+1652.723208072" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.636959 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637813 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="init" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637828 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="init" Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637857 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: E0214 04:37:01.637880 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.637887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.638121 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" containerName="heat-db-sync" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.638156 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5971b677-9b43-4667-b205-3926975d03d8" containerName="dnsmasq-dns" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.639281 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.660112 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.706776 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.709052 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.723669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.734756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.734805 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.735141 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.735221 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.741667 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.743558 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.767722 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.837904 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838060 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838208 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838236 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838257 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838382 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838574 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838696 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838726 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838781 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838829 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.838891 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.839104 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.839137 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.844777 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-combined-ca-bundle\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " 
pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.846418 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data-custom\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.852585 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcce6a26-826f-4268-9007-2e3c4411450f-config-data\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.855903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rhgq\" (UniqueName: \"kubernetes.io/projected/fcce6a26-826f-4268-9007-2e3c4411450f-kube-api-access-7rhgq\") pod \"heat-engine-7b479dbc77-k8ts7\" (UID: \"fcce6a26-826f-4268-9007-2e3c4411450f\") " pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.940971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941673 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941725 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941747 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941773 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 
04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941871 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.941950 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942145 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942255 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.942304 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.959007 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data-custom\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.964688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-config-data\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.964761 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-internal-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965319 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-combined-ca-bundle\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-public-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.965974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-internal-tls-certs\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.966888 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-combined-ca-bundle\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.967210 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24d4f5bc-b41b-4f17-977e-d36995a99521-public-tls-certs\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.967580 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.968238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7996e855-fbe0-4324-a337-8841df83e714-config-data-custom\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.969448 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcr9p\" (UniqueName: \"kubernetes.io/projected/7996e855-fbe0-4324-a337-8841df83e714-kube-api-access-fcr9p\") pod \"heat-api-64c645895b-sclxg\" (UID: \"7996e855-fbe0-4324-a337-8841df83e714\") " pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.973898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs766\" (UniqueName: \"kubernetes.io/projected/24d4f5bc-b41b-4f17-977e-d36995a99521-kube-api-access-zs766\") pod \"heat-cfnapi-57b4cc7645-246cl\" (UID: \"24d4f5bc-b41b-4f17-977e-d36995a99521\") " pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:01 crc kubenswrapper[4867]: I0214 04:37:01.975109 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.010677 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-tnn8p" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.042260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.088473 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.095388 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.095660 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" containerID="cri-o://807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" gracePeriod=10 Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.665108 4867 generic.go:334] "Generic (PLEG): container finished" podID="323a0af9-9e80-476b-8315-e20a6dd41293" containerID="807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" exitCode=0 Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.665805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38"} Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.690635 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7b479dbc77-k8ts7"] Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.731390 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.873864 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876055 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876200 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876337 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876451 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876639 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.876801 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") pod \"323a0af9-9e80-476b-8315-e20a6dd41293\" (UID: \"323a0af9-9e80-476b-8315-e20a6dd41293\") " Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.893829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9" (OuterVolumeSpecName: "kube-api-access-lgct9") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "kube-api-access-lgct9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:02 crc kubenswrapper[4867]: I0214 04:37:02.982171 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgct9\" (UniqueName: \"kubernetes.io/projected/323a0af9-9e80-476b-8315-e20a6dd41293-kube-api-access-lgct9\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.063387 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.088410 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.122692 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config" (OuterVolumeSpecName: "config") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.124617 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.126985 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.133958 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.145778 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "323a0af9-9e80-476b-8315-e20a6dd41293" (UID: "323a0af9-9e80-476b-8315-e20a6dd41293"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.205989 4867 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-config\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206025 4867 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206083 4867 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206107 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.206117 4867 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/323a0af9-9e80-476b-8315-e20a6dd41293-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.214533 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57b4cc7645-246cl"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.214592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64c645895b-sclxg"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.710646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57b4cc7645-246cl" event={"ID":"24d4f5bc-b41b-4f17-977e-d36995a99521","Type":"ContainerStarted","Data":"3bb339c7fb5a9f6190d834f9e570b3b868e24aa60f5c4bad1436e0ef3d6f9efc"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.712278 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64c645895b-sclxg" event={"ID":"7996e855-fbe0-4324-a337-8841df83e714","Type":"ContainerStarted","Data":"3982672d28d8f4fc4e1d43912a585753f73acb6860abc2d71be47146ae7fd801"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.716843 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" event={"ID":"323a0af9-9e80-476b-8315-e20a6dd41293","Type":"ContainerDied","Data":"09af16d3a20690cfc39a0ddb82488ac9f522f8bc29592b76f2d5e3c3d0549e4a"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.717029 4867 scope.go:117] "RemoveContainer" containerID="807a72ee976321737b9888e2e6b03023367c7b0608270daa117db375e52e0e38" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.717308 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-prh4d" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.724327 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b479dbc77-k8ts7" event={"ID":"fcce6a26-826f-4268-9007-2e3c4411450f","Type":"ContainerStarted","Data":"28c1c18be8bc5af10b632138413323e95c7af925ff2cd4c90fd2806527ff9500"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.724373 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7b479dbc77-k8ts7" event={"ID":"fcce6a26-826f-4268-9007-2e3c4411450f","Type":"ContainerStarted","Data":"1f8d5d90d5f40764a034778f809df10a77a1f772ff13cee8c1701ef09bdbcdea"} Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.725875 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.741421 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7b479dbc77-k8ts7" podStartSLOduration=2.741405224 podStartE2EDuration="2.741405224s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:03.739283428 +0000 UTC m=+1655.820220742" watchObservedRunningTime="2026-02-14 04:37:03.741405224 +0000 UTC m=+1655.822342538" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.760874 4867 scope.go:117] "RemoveContainer" containerID="77228a0f066425d86bdda1aaf9057e24f843996dcb0f57300b551c06e527bd22" Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.801846 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:03 crc kubenswrapper[4867]: I0214 04:37:03.812831 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-prh4d"] Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.017309 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" path="/var/lib/kubelet/pods/323a0af9-9e80-476b-8315-e20a6dd41293/volumes" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.758461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57b4cc7645-246cl" event={"ID":"24d4f5bc-b41b-4f17-977e-d36995a99521","Type":"ContainerStarted","Data":"a34564f9258d21118b5a9fc47dbf271e87bce6c7fcb1542194d809e1a09780fc"} Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.758830 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.764207 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64c645895b-sclxg" event={"ID":"7996e855-fbe0-4324-a337-8841df83e714","Type":"ContainerStarted","Data":"7496fb13527df2a9a4ad391ac85f871999c3edeb59b8d116df71d91e7094c773"} Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.764256 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.806275 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57b4cc7645-246cl" podStartSLOduration=2.890746538 podStartE2EDuration="4.806241124s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="2026-02-14 04:37:03.078746854 +0000 UTC 
m=+1655.159684168" lastFinishedPulling="2026-02-14 04:37:04.99424144 +0000 UTC m=+1657.075178754" observedRunningTime="2026-02-14 04:37:05.790822329 +0000 UTC m=+1657.871759643" watchObservedRunningTime="2026-02-14 04:37:05.806241124 +0000 UTC m=+1657.887178428" Feb 14 04:37:05 crc kubenswrapper[4867]: I0214 04:37:05.817646 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64c645895b-sclxg" podStartSLOduration=2.870956568 podStartE2EDuration="4.817620623s" podCreationTimestamp="2026-02-14 04:37:01 +0000 UTC" firstStartedPulling="2026-02-14 04:37:03.064098729 +0000 UTC m=+1655.145036043" lastFinishedPulling="2026-02-14 04:37:05.010762784 +0000 UTC m=+1657.091700098" observedRunningTime="2026-02-14 04:37:05.813908166 +0000 UTC m=+1657.894845480" watchObservedRunningTime="2026-02-14 04:37:05.817620623 +0000 UTC m=+1657.898557937" Feb 14 04:37:10 crc kubenswrapper[4867]: I0214 04:37:10.998050 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:11 crc kubenswrapper[4867]: E0214 04:37:10.999326 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.266016 4867 scope.go:117] "RemoveContainer" containerID="6388af96a9e8cd26ae554c99b13aa233ce10e1dc8de2f02a6f674fb4e51e6bd3" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.330870 4867 scope.go:117] "RemoveContainer" containerID="69544341c5ca0c8dd1de9f8750f822d8a653543dcc8f00f4deed22c84b48df5d" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.376139 4867 scope.go:117] "RemoveContainer" containerID="15ed364b0a49f81fd4949fca04378cd1d1cf5fcd161d0b8180bec6ace68b75fa" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.406032 4867 scope.go:117] "RemoveContainer" containerID="f89dad4a87be20772a4f4fed951cb674eab08ab883a7cf25710c335ef40caf93" Feb 14 04:37:11 crc kubenswrapper[4867]: I0214 04:37:11.462468 4867 scope.go:117] "RemoveContainer" containerID="509fb90f4e6334b9685b885ef46fd5f42dffc3b95cc1b48b90fc4906b6403562" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.538479 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-64c645895b-sclxg" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.547782 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-57b4cc7645-246cl" Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.652745 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.652986 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6f55d59bf5-wfw72" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" containerID="cri-o://c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" gracePeriod=60 Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.681789 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:13 crc kubenswrapper[4867]: I0214 04:37:13.682019 
4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" containerID="cri-o://11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" gracePeriod=60 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.931755 4867 generic.go:334] "Generic (PLEG): container finished" podID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerID="11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.932300 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerDied","Data":"11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7"} Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.937924 4867 generic.go:334] "Generic (PLEG): container finished" podID="c8afa7ab-eaaa-4558-99d5-c655cf271f62" containerID="ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.937988 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerDied","Data":"ad151054a2c473e2c8df602d26f12c713ca90442f0916e18cc8ecec85468a30c"} Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.941531 4867 generic.go:334] "Generic (PLEG): container finished" podID="0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c" containerID="7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6" exitCode=0 Feb 14 04:37:16 crc kubenswrapper[4867]: I0214 04:37:16.941612 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerDied","Data":"7079c60795ab2b59c2702098f1c0c9b2fdc7e32a70ad21a4cb53c2929c2218b6"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.420332 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466278 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466384 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466484 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466550 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466574 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.466617 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") pod \"16f76a07-1b4d-4057-84c6-0cae915e01f7\" (UID: \"16f76a07-1b4d-4057-84c6-0cae915e01f7\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.483793 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.486777 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg" (OuterVolumeSpecName: "kube-api-access-m47wg") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "kube-api-access-m47wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.568647 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m47wg\" (UniqueName: \"kubernetes.io/projected/16f76a07-1b4d-4057-84c6-0cae915e01f7-kube-api-access-m47wg\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.568683 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.583808 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.610650 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data" (OuterVolumeSpecName: "config-data") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.626708 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679912 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679947 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.679958 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.709802 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "16f76a07-1b4d-4057-84c6-0cae915e01f7" (UID: "16f76a07-1b4d-4057-84c6-0cae915e01f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.762975 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796594 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796656 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796707 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796851 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.796900 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") pod \"fe0cc502-2f6a-41d9-8761-da930802201e\" (UID: \"fe0cc502-2f6a-41d9-8761-da930802201e\") " Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.797305 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16f76a07-1b4d-4057-84c6-0cae915e01f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.806246 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.807491 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p" (OuterVolumeSpecName: "kube-api-access-vf86p") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "kube-api-access-vf86p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.870703 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901153 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901477 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.901490 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf86p\" (UniqueName: \"kubernetes.io/projected/fe0cc502-2f6a-41d9-8761-da930802201e-kube-api-access-vf86p\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.919603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data" (OuterVolumeSpecName: "config-data") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.922601 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.929840 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe0cc502-2f6a-41d9-8761-da930802201e" (UID: "fe0cc502-2f6a-41d9-8761-da930802201e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.956907 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" event={"ID":"16f76a07-1b4d-4057-84c6-0cae915e01f7","Type":"ContainerDied","Data":"830da82a952f0eb79b72815166e8401af585ff9b46564a5260025bbc1ac28ad6"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.956966 4867 scope.go:117] "RemoveContainer" containerID="11c8bf6db3fba0102b4b30e1ce307cf289b32ee921d87494ebf82f97afd541e7" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.957121 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-74d8ffb764-wz9cp" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.962737 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"c8afa7ab-eaaa-4558-99d5-c655cf271f62","Type":"ContainerStarted","Data":"512f2e34264f13e3be24daf121ee312c117a304ab4942efe8423bc47687661a4"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.963495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.978878 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c","Type":"ContainerStarted","Data":"f9320adf76aa40017005d159c959a51602c8dbc3b8224d7010e1c233cc75e4a0"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.979940 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981312 4867 generic.go:334] "Generic (PLEG): container finished" podID="fe0cc502-2f6a-41d9-8761-da930802201e" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" exitCode=0 Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerDied","Data":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981347 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f55d59bf5-wfw72" event={"ID":"fe0cc502-2f6a-41d9-8761-da930802201e","Type":"ContainerDied","Data":"049d086e76bc10d2a5f14c7d8a9fe02a2d5fd8eadb747b6e9d8413f65e7ceb0e"} Feb 14 04:37:17 crc kubenswrapper[4867]: I0214 04:37:17.981389 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6f55d59bf5-wfw72" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.010963 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.011000 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.011013 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe0cc502-2f6a-41d9-8761-da930802201e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.021423 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=44.021403859 podStartE2EDuration="44.021403859s" podCreationTimestamp="2026-02-14 04:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:18.016699475 +0000 UTC m=+1670.097636789" watchObservedRunningTime="2026-02-14 04:37:18.021403859 +0000 UTC m=+1670.102341173" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.064270 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.064252105 podStartE2EDuration="44.064252105s" podCreationTimestamp="2026-02-14 04:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:37:18.053858822 +0000 UTC m=+1670.134796136" watchObservedRunningTime="2026-02-14 04:37:18.064252105 +0000 UTC m=+1670.145189419" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.149306 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.166589 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-74d8ffb764-wz9cp"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.179779 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.196807 4867 scope.go:117] "RemoveContainer" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.203198 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6f55d59bf5-wfw72"] Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.255326 4867 scope.go:117] "RemoveContainer" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: E0214 04:37:18.258087 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": container with ID starting with c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503 not found: ID does not exist" containerID="c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503" Feb 14 04:37:18 crc kubenswrapper[4867]: I0214 04:37:18.258141 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503"} err="failed to get container status \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": rpc error: code = NotFound desc = could not find container \"c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503\": container with ID starting with c20166d492a9fa577b96f854901f8b51fbf65bf3bdc78203cf773c6d9899e503 not found: ID does not exist" Feb 14 04:37:19 crc kubenswrapper[4867]: I0214 04:37:19.126128 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" path="/var/lib/kubelet/pods/16f76a07-1b4d-4057-84c6-0cae915e01f7/volumes" Feb 14 04:37:19 crc kubenswrapper[4867]: I0214 04:37:19.127578 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" path="/var/lib/kubelet/pods/fe0cc502-2f6a-41d9-8761-da930802201e/volumes" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532030 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532586 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532601 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532622 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532628 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532639 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532646 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: E0214 04:37:21.532674 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="init" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532680 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="init" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532899 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe0cc502-2f6a-41d9-8761-da930802201e" containerName="heat-api" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532910 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f76a07-1b4d-4057-84c6-0cae915e01f7" containerName="heat-cfnapi" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.532927 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="323a0af9-9e80-476b-8315-e20a6dd41293" containerName="dnsmasq-dns" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.533763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.536409 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537350 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537888 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.537994 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.588880 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.602757 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.602943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.602969 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.603039 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.704919 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.705450 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtch4\" (UniqueName: 
\"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.705484 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.706002 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.739176 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.739185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.745260 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.749682 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:21 crc kubenswrapper[4867]: I0214 04:37:21.874084 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.036128 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7b479dbc77-k8ts7" Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.167261 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.167614 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7797898b6d-54xz8" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" containerID="cri-o://f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" gracePeriod=60 Feb 14 04:37:22 crc kubenswrapper[4867]: I0214 04:37:22.997833 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:22 crc kubenswrapper[4867]: E0214 04:37:22.998549 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:23 crc kubenswrapper[4867]: W0214 04:37:23.040946 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51f6e45c_a545_4b49_b6f8_a3048619f24d.slice/crio-1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24 WatchSource:0}: Error finding container 1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24: Status 404 returned error can't find the container with id 1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24 Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.061143 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf"] Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.949013 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:37:23 crc kubenswrapper[4867]: I0214 04:37:23.966729 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dnl28"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.041355 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.042997 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.046134 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.060285 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.068427 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerStarted","Data":"1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24"} Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167533 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167657 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167742 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.167918 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270180 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.270311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277119 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277308 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.277705 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.292807 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"aodh-db-sync-vgdj4\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.385839 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:24 crc kubenswrapper[4867]: I0214 04:37:24.898071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-vgdj4"] Feb 14 04:37:25 crc kubenswrapper[4867]: I0214 04:37:25.025854 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df373c99-9a99-4793-90ef-3ad7887e5e3e" path="/var/lib/kubelet/pods/df373c99-9a99-4793-90ef-3ad7887e5e3e/volumes" Feb 14 04:37:25 crc kubenswrapper[4867]: I0214 04:37:25.086735 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerStarted","Data":"c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b"} Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.984569 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.986686 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.988952 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Feb 14 04:37:29 crc kubenswrapper[4867]: E0214 04:37:29.989025 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7797898b6d-54xz8" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.356546 4867 generic.go:334] "Generic (PLEG): container finished" podID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" exitCode=0 Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.356765 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerDied","Data":"f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198"} Feb 14 04:37:33 crc kubenswrapper[4867]: I0214 04:37:33.998609 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:33 crc kubenswrapper[4867]: E0214 04:37:33.999328 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.712323 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.784010 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:34 crc kubenswrapper[4867]: I0214 04:37:34.979785 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.920201 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.920834 4867 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 14 04:37:38 crc kubenswrapper[4867]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Feb 14 04:37:38 crc kubenswrapper[4867]: - hosts: all Feb 14 04:37:38 crc kubenswrapper[4867]: strategy: linear Feb 14 04:37:38 crc kubenswrapper[4867]: tasks: Feb 14 04:37:38 crc kubenswrapper[4867]: - name: Enable podified-repos Feb 14 04:37:38 crc kubenswrapper[4867]: become: true Feb 14 04:37:38 crc kubenswrapper[4867]: ansible.builtin.shell: | Feb 14 04:37:38 crc kubenswrapper[4867]: set -euxo pipefail Feb 14 04:37:38 crc kubenswrapper[4867]: pushd /var/tmp Feb 14 04:37:38 
crc kubenswrapper[4867]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Feb 14 04:37:38 crc kubenswrapper[4867]: pushd repo-setup-main Feb 14 04:37:38 crc kubenswrapper[4867]: python3 -m venv ./venv Feb 14 04:37:38 crc kubenswrapper[4867]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Feb 14 04:37:38 crc kubenswrapper[4867]: ./venv/bin/repo-setup current-podified -b antelope Feb 14 04:37:38 crc kubenswrapper[4867]: popd Feb 14 04:37:38 crc kubenswrapper[4867]: rm -rf repo-setup-main Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Feb 14 04:37:38 crc kubenswrapper[4867]: edpm_override_hosts: openstack-edpm-ipam Feb 14 04:37:38 crc kubenswrapper[4867]: edpm_service_type: repo-setup Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: Feb 14 04:37:38 crc kubenswrapper[4867]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtch4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf_openstack(51f6e45c-a545-4b49-b6f8-a3048619f24d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 14 04:37:38 crc kubenswrapper[4867]: > logger="UnhandledError" Feb 14 04:37:38 crc kubenswrapper[4867]: E0214 04:37:38.922017 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.033853 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 14 04:37:39 crc kubenswrapper[4867]: E0214 04:37:39.453187 
4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.570191 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.688725 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.688849 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.689037 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.689136 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") pod \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\" (UID: \"7535f37c-f2f6-4e75-bfa2-48211fe86ef6\") " Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.699986 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.701929 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2" (OuterVolumeSpecName: "kube-api-access-rpdz2") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "kube-api-access-rpdz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.735425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.758836 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data" (OuterVolumeSpecName: "config-data") pod "7535f37c-f2f6-4e75-bfa2-48211fe86ef6" (UID: "7535f37c-f2f6-4e75-bfa2-48211fe86ef6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.791940 4867 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.791987 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpdz2\" (UniqueName: \"kubernetes.io/projected/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-kube-api-access-rpdz2\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.792002 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.792012 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7535f37c-f2f6-4e75-bfa2-48211fe86ef6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:39 crc kubenswrapper[4867]: I0214 04:37:39.908527 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" containerID="cri-o://88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3" gracePeriod=604795 Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.462651 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerStarted","Data":"fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a"} Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465108 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7797898b6d-54xz8" event={"ID":"7535f37c-f2f6-4e75-bfa2-48211fe86ef6","Type":"ContainerDied","Data":"dd3c354011933e0f94727b4d8a7a0061c7e339109544dc62c211e6c435dc4d43"} Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465159 4867 scope.go:117] "RemoveContainer" containerID="f9f2e84685b68ba026ed32e937f1e9734f0455c3a2cb5f5a9465424b4369a198" Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.465193 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7797898b6d-54xz8" Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.508089 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:40 crc kubenswrapper[4867]: I0214 04:37:40.533339 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7797898b6d-54xz8"] Feb 14 04:37:41 crc kubenswrapper[4867]: I0214 04:37:41.033092 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" path="/var/lib/kubelet/pods/7535f37c-f2f6-4e75-bfa2-48211fe86ef6/volumes" Feb 14 04:37:41 crc kubenswrapper[4867]: I0214 04:37:41.505630 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-vgdj4" podStartSLOduration=3.390382844 podStartE2EDuration="17.505611729s" podCreationTimestamp="2026-02-14 04:37:24 +0000 UTC" firstStartedPulling="2026-02-14 04:37:24.909723679 +0000 UTC m=+1676.990660993" lastFinishedPulling="2026-02-14 04:37:39.024952564 +0000 UTC m=+1691.105889878" observedRunningTime="2026-02-14 04:37:41.503989187 +0000 UTC m=+1693.584926501" watchObservedRunningTime="2026-02-14 04:37:41.505611729 +0000 UTC m=+1693.586549043" Feb 14 04:37:44 crc kubenswrapper[4867]: I0214 04:37:44.519293 4867 generic.go:334] "Generic (PLEG): container finished" podID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerID="fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a" exitCode=0 Feb 14 04:37:44 crc kubenswrapper[4867]: I0214 04:37:44.519404 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerDied","Data":"fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a"} Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.074154 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166236 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.166467 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") pod \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\" (UID: \"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.175027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts" (OuterVolumeSpecName: "scripts") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.176334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22" (OuterVolumeSpecName: "kube-api-access-4zc22") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "kube-api-access-4zc22". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.206050 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data" (OuterVolumeSpecName: "config-data") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.211616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" (UID: "844735e8-e1c8-426f-8f5b-ce4f64e2ffbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269077 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zc22\" (UniqueName: \"kubernetes.io/projected/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-kube-api-access-4zc22\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269471 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269486 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.269498 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.551368 4867 generic.go:334] "Generic (PLEG): container finished" podID="6bc83863-74f4-4509-969c-0f3305a542a8" containerID="88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3" exitCode=0 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.551479 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3"} Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.560075 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-vgdj4" event={"ID":"844735e8-e1c8-426f-8f5b-ce4f64e2ffbf","Type":"ContainerDied","Data":"c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b"} Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.560117 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f61f015fd804d2bc75796f49931d65d47ad38dbb5345ac1f23a25001f8039b" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.560182 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-vgdj4" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.701094 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783610 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783672 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783699 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.783813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784478 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784580 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784645 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784695 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") pod 
\"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.784793 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") pod \"6bc83863-74f4-4509-969c-0f3305a542a8\" (UID: \"6bc83863-74f4-4509-969c-0f3305a542a8\") " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.787252 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.787904 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.790843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.791844 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk" (OuterVolumeSpecName: "kube-api-access-294tk") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "kube-api-access-294tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.802173 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info" (OuterVolumeSpecName: "pod-info") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.803600 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.804152 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.836051 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data" (OuterVolumeSpecName: "config-data") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.873718 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f" (OuterVolumeSpecName: "persistence") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "pvc-64ab6375-8d81-46bd-80ba-b738c813923f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.880054 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf" (OuterVolumeSpecName: "server-conf") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889238 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") on node \"crc\" " Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889268 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889280 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6bc83863-74f4-4509-969c-0f3305a542a8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889289 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889297 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889306 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889315 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6bc83863-74f4-4509-969c-0f3305a542a8-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889327 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6bc83863-74f4-4509-969c-0f3305a542a8-server-conf\") 
on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889336 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294tk\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-kube-api-access-294tk\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.889344 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.954357 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.954670 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-64ab6375-8d81-46bd-80ba-b738c813923f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f") on node "crc" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.963188 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966030 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" containerID="cri-o://4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" gracePeriod=30 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966704 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" containerID="cri-o://27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" gracePeriod=30 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966771 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" containerID="cri-o://57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" gracePeriod=30 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.966814 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" containerID="cri-o://a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" gracePeriod=30 Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.993540 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:46 crc kubenswrapper[4867]: I0214 04:37:46.998968 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:37:46 crc kubenswrapper[4867]: E0214 04:37:46.999225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.043757 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6bc83863-74f4-4509-969c-0f3305a542a8" (UID: "6bc83863-74f4-4509-969c-0f3305a542a8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.095725 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6bc83863-74f4-4509-969c-0f3305a542a8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573178 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" exitCode=0 Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573477 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" exitCode=0 Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be"} Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.573562 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45"} Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575786 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"6bc83863-74f4-4509-969c-0f3305a542a8","Type":"ContainerDied","Data":"6d2235a75be13119e9c9aa74a5f3a2e2f13d32b41febb3b537fd57f955f1f8bc"} Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575821 4867 scope.go:117] "RemoveContainer" containerID="88c159d1a43dc50e68ca5c624034eb8becafe830a496b5d85f7c11e183f4f8b3" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.575977 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.612131 4867 scope.go:117] "RemoveContainer" containerID="da72547c3496fadaa474b36d059bf8582881ee27c6b6aa73c9aa360c8e76f26d" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.616552 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.634544 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.656768 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657418 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657445 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657463 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657469 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657484 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657491 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync" Feb 14 04:37:47 crc kubenswrapper[4867]: E0214 04:37:47.657526 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="setup-container" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657532 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="setup-container" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657786 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" containerName="rabbitmq" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657820 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7535f37c-f2f6-4e75-bfa2-48211fe86ef6" containerName="heat-engine" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.657844 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" containerName="aodh-db-sync" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.659493 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.683449 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709402 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709455 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709864 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.709932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710093 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710124 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710259 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.710283 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812625 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812704 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812751 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812803 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812914 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.812983 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 
04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813052 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813099 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813172 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813804 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.813924 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814401 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814606 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-server-conf\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.814724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/82f2a63e-b256-4ad7-96ee-1def8a174cfb-config-data\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.815490 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.815551 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55ff7cc17667ae9e120da2b34de2e1baed28e5c0bfceac7c1699349f36759e58/globalmount\"" pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.821006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/82f2a63e-b256-4ad7-96ee-1def8a174cfb-pod-info\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823278 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823353 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/82f2a63e-b256-4ad7-96ee-1def8a174cfb-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.823568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.841281 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkd5b\" (UniqueName: \"kubernetes.io/projected/82f2a63e-b256-4ad7-96ee-1def8a174cfb-kube-api-access-xkd5b\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.891678 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-64ab6375-8d81-46bd-80ba-b738c813923f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-64ab6375-8d81-46bd-80ba-b738c813923f\") pod \"rabbitmq-server-1\" (UID: \"82f2a63e-b256-4ad7-96ee-1def8a174cfb\") " pod="openstack/rabbitmq-server-1" Feb 14 04:37:47 crc kubenswrapper[4867]: I0214 04:37:47.978350 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-1" Feb 14 04:37:48 crc kubenswrapper[4867]: I0214 04:37:48.580286 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Feb 14 04:37:48 crc kubenswrapper[4867]: W0214 04:37:48.584718 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24 WatchSource:0}: Error finding container 59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24: Status 404 returned error can't find the container with id 59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24 Feb 14 04:37:49 crc kubenswrapper[4867]: I0214 04:37:49.015007 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bc83863-74f4-4509-969c-0f3305a542a8" path="/var/lib/kubelet/pods/6bc83863-74f4-4509-969c-0f3305a542a8/volumes" Feb 14 04:37:49 crc kubenswrapper[4867]: I0214 04:37:49.606927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"59012886a85bf863af669f0867fcced616071f97cfdf03bf6796c95d85bbae24"} Feb 14 04:37:50 crc kubenswrapper[4867]: I0214 04:37:50.619352 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d"} Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.653599 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" exitCode=0 Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.655497 4867 generic.go:334] "Generic (PLEG): container finished" podID="58861691-18ee-408e-9b79-b12a411e99d0" containerID="57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" exitCode=0 Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.658294 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95"} Feb 14 04:37:51 crc kubenswrapper[4867]: I0214 04:37:51.658904 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef"} Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.079392 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.153352 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.153633 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154362 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.154747 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.155201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") pod \"58861691-18ee-408e-9b79-b12a411e99d0\" (UID: \"58861691-18ee-408e-9b79-b12a411e99d0\") " Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.169551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts" (OuterVolumeSpecName: "scripts") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.169682 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq" (OuterVolumeSpecName: "kube-api-access-m47lq") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "kube-api-access-m47lq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.277568 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m47lq\" (UniqueName: \"kubernetes.io/projected/58861691-18ee-408e-9b79-b12a411e99d0-kube-api-access-m47lq\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.277943 4867 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-scripts\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.280786 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.292768 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.338118 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.354592 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data" (OuterVolumeSpecName: "config-data") pod "58861691-18ee-408e-9b79-b12a411e99d0" (UID: "58861691-18ee-408e-9b79-b12a411e99d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382260 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382351 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382370 4867 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.382385 4867 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58861691-18ee-408e-9b79-b12a411e99d0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.681200 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.680958 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"58861691-18ee-408e-9b79-b12a411e99d0","Type":"ContainerDied","Data":"cc6bfc1f8b14bfadc90bd97fe9104d42e32da1b206a8c9f9b7d46cb64815cc9b"} Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.681297 4867 scope.go:117] "RemoveContainer" containerID="27e1492030b12bf8e17f8ae9468e42331d9cc302f11974a5a0fc14d2d151ad95" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.752488 4867 scope.go:117] "RemoveContainer" containerID="57c262920dac84f166643430c62b34648c079ac3eb2252d50e804a444b3475ef" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.765068 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.786715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.815608 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816308 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816329 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816338 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816368 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816376 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: E0214 04:37:52.816392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816400 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816712 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-evaluator" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816742 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-listener" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816763 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-api" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.816781 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="58861691-18ee-408e-9b79-b12a411e99d0" containerName="aodh-notifier" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.819724 4867 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831212 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831598 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831774 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.831917 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.832066 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bzvlt" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.871021 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.885755 4867 scope.go:117] "RemoveContainer" containerID="a6c180f71636733ac3331112696898cf83a02e4f76f35724da02b3fc7166a0be" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899705 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899735 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899794 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899842 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.899862 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:52 crc kubenswrapper[4867]: I0214 04:37:52.964704 4867 scope.go:117] "RemoveContainer" 
containerID="4f9fbe8278c2f8217fd9d1c65cfa1d016b54bc10a1b47dd522ac53e2da5bac45" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001752 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001800 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001857 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001906 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001926 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.001980 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.019334 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-combined-ca-bundle\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.031115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-internal-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.031844 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-config-data\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.034049 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-scripts\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.038141 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4jf4\" (UniqueName: \"kubernetes.io/projected/532a3c72-e995-4be9-a7db-f288b6c1a311-kube-api-access-b4jf4\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.041941 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/532a3c72-e995-4be9-a7db-f288b6c1a311-public-tls-certs\") pod \"aodh-0\" (UID: \"532a3c72-e995-4be9-a7db-f288b6c1a311\") " pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.043235 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58861691-18ee-408e-9b79-b12a411e99d0" path="/var/lib/kubelet/pods/58861691-18ee-408e-9b79-b12a411e99d0/volumes" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.178221 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Feb 14 04:37:53 crc kubenswrapper[4867]: I0214 04:37:53.694713 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Feb 14 04:37:53 crc kubenswrapper[4867]: W0214 04:37:53.698231 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532a3c72_e995_4be9_a7db_f288b6c1a311.slice/crio-381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42 WatchSource:0}: Error finding container 381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42: Status 404 returned error can't find the container with id 381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42 Feb 14 04:37:54 crc kubenswrapper[4867]: I0214 04:37:54.770240 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"39999fbf2ddf3c22f5b9205c3843402abc9bb8243fcc54eedfbd407de609235f"} Feb 14 04:37:54 crc kubenswrapper[4867]: I0214 04:37:54.770764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"381dfd8a48c307eb4aade4eebfd760203b44f8b6d4481d0431e1e872168cde42"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.462372 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.784606 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerStarted","Data":"4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.790553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"378c59ea6c07febe7b47f99516a097f124d4b45c0df3c5c729a6d53fa1de580b"} Feb 14 04:37:55 crc kubenswrapper[4867]: I0214 04:37:55.813847 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" podStartSLOduration=2.399048747 podStartE2EDuration="34.813817035s" podCreationTimestamp="2026-02-14 04:37:21 +0000 UTC" firstStartedPulling="2026-02-14 04:37:23.044566844 +0000 UTC m=+1675.125504158" lastFinishedPulling="2026-02-14 
04:37:55.459335132 +0000 UTC m=+1707.540272446" observedRunningTime="2026-02-14 04:37:55.802038756 +0000 UTC m=+1707.882976070" watchObservedRunningTime="2026-02-14 04:37:55.813817035 +0000 UTC m=+1707.894754359" Feb 14 04:37:57 crc kubenswrapper[4867]: I0214 04:37:57.819058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"d2f3218bdb190f321c2fbe6cd36634897baca28f15e2ed125c73b6fd0acc1b07"} Feb 14 04:37:58 crc kubenswrapper[4867]: I0214 04:37:58.834198 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"532a3c72-e995-4be9-a7db-f288b6c1a311","Type":"ContainerStarted","Data":"e270e96ca58e876c7e16b4b03ffe7a632053e7fb18379eb0e83e058f2f0eec47"} Feb 14 04:37:58 crc kubenswrapper[4867]: I0214 04:37:58.880130 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.469795514 podStartE2EDuration="6.880110567s" podCreationTimestamp="2026-02-14 04:37:52 +0000 UTC" firstStartedPulling="2026-02-14 04:37:53.701477417 +0000 UTC m=+1705.782414731" lastFinishedPulling="2026-02-14 04:37:58.11179246 +0000 UTC m=+1710.192729784" observedRunningTime="2026-02-14 04:37:58.869601011 +0000 UTC m=+1710.950538325" watchObservedRunningTime="2026-02-14 04:37:58.880110567 +0000 UTC m=+1710.961047881" Feb 14 04:38:01 crc kubenswrapper[4867]: I0214 04:38:01.998352 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:02 crc kubenswrapper[4867]: E0214 04:38:01.999376 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:07 crc kubenswrapper[4867]: I0214 04:38:07.964282 4867 generic.go:334] "Generic (PLEG): container finished" podID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerID="4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0" exitCode=0 Feb 14 04:38:07 crc kubenswrapper[4867]: I0214 04:38:07.964369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerDied","Data":"4dfb9147b07e16c62fa4639323c3d36860eb45af3594e44e8ad1917e1137afb0"} Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.592714 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695295 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695491 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.695859 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") pod \"51f6e45c-a545-4b49-b6f8-a3048619f24d\" (UID: \"51f6e45c-a545-4b49-b6f8-a3048619f24d\") " Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.709761 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.712480 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4" (OuterVolumeSpecName: "kube-api-access-mtch4") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "kube-api-access-mtch4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.738776 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.738833 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory" (OuterVolumeSpecName: "inventory") pod "51f6e45c-a545-4b49-b6f8-a3048619f24d" (UID: "51f6e45c-a545-4b49-b6f8-a3048619f24d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800149 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800297 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800395 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtch4\" (UniqueName: \"kubernetes.io/projected/51f6e45c-a545-4b49-b6f8-a3048619f24d-kube-api-access-mtch4\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.800487 4867 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51f6e45c-a545-4b49-b6f8-a3048619f24d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990342 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" event={"ID":"51f6e45c-a545-4b49-b6f8-a3048619f24d","Type":"ContainerDied","Data":"1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24"} Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990395 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed5f62b1367ab5d606495b7d287f182fb33167f5f1ae1565d0110ed63160b24" Feb 14 04:38:09 crc kubenswrapper[4867]: I0214 04:38:09.990394 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.093031 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:10 crc kubenswrapper[4867]: E0214 04:38:10.093714 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.093742 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.094103 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f6e45c-a545-4b49-b6f8-a3048619f24d" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.095248 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100017 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100095 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100242 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.100333 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.112413 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.210486 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.210603 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.211484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314081 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.314287 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.317738 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.325185 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.345157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-drcl6\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.412410 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:10 crc kubenswrapper[4867]: W0214 04:38:10.982470 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c240366_e845_4987_943c_afc965ddc2f4.slice/crio-616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174 WatchSource:0}: Error finding container 616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174: Status 404 returned error can't find the container with id 616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174 Feb 14 04:38:10 crc kubenswrapper[4867]: I0214 04:38:10.989832 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6"] Feb 14 04:38:11 crc kubenswrapper[4867]: I0214 04:38:11.015230 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerStarted","Data":"616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174"} Feb 14 04:38:11 crc kubenswrapper[4867]: I0214 04:38:11.740692 4867 scope.go:117] "RemoveContainer" containerID="60316f17511ab27fc3a729f8ccdd9f3a0822ad95a99d3ea5ac358cbcc6ece82a" Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.018047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerStarted","Data":"a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12"} Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.042941 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" podStartSLOduration=1.408118221 podStartE2EDuration="2.04292041s" podCreationTimestamp="2026-02-14 04:38:10 +0000 UTC" 
firstStartedPulling="2026-02-14 04:38:10.986470043 +0000 UTC m=+1723.067407357" lastFinishedPulling="2026-02-14 04:38:11.621272232 +0000 UTC m=+1723.702209546" observedRunningTime="2026-02-14 04:38:12.032047974 +0000 UTC m=+1724.112985288" watchObservedRunningTime="2026-02-14 04:38:12.04292041 +0000 UTC m=+1724.123857724" Feb 14 04:38:12 crc kubenswrapper[4867]: I0214 04:38:12.998130 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:12 crc kubenswrapper[4867]: E0214 04:38:12.998729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:15 crc kubenswrapper[4867]: I0214 04:38:15.055394 4867 generic.go:334] "Generic (PLEG): container finished" podID="0c240366-e845-4987-943c-afc965ddc2f4" containerID="a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12" exitCode=0 Feb 14 04:38:15 crc kubenswrapper[4867]: I0214 04:38:15.055499 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerDied","Data":"a1eef6317edb4a0f0097da785220304f9ef9d722ee3c945d26560564cc6deb12"} Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.814919 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886404 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.886741 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") pod \"0c240366-e845-4987-943c-afc965ddc2f4\" (UID: \"0c240366-e845-4987-943c-afc965ddc2f4\") " Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.897940 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8" (OuterVolumeSpecName: "kube-api-access-xjpp8") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). InnerVolumeSpecName "kube-api-access-xjpp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.932722 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory" (OuterVolumeSpecName: "inventory") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.939009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c240366-e845-4987-943c-afc965ddc2f4" (UID: "0c240366-e845-4987-943c-afc965ddc2f4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989733 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989767 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c240366-e845-4987-943c-afc965ddc2f4-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:16 crc kubenswrapper[4867]: I0214 04:38:16.989777 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjpp8\" (UniqueName: \"kubernetes.io/projected/0c240366-e845-4987-943c-afc965ddc2f4-kube-api-access-xjpp8\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.081303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" event={"ID":"0c240366-e845-4987-943c-afc965ddc2f4","Type":"ContainerDied","Data":"616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174"} Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.081351 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="616ac096f90001fabeb48cda041cbb7023d85d90c7dd445ca19f1756c6bdd174" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.081412 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-drcl6" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.153969 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:17 crc kubenswrapper[4867]: E0214 04:38:17.154479 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.154498 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.154731 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c240366-e845-4987-943c-afc965ddc2f4" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.155494 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157371 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157553 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157551 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.157902 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.180228 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197264 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197398 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.197550 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300495 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300647 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300772 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.300831 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.304189 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.304408 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.318478 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.328784 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:17 crc kubenswrapper[4867]: I0214 04:38:17.478763 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:38:18 crc kubenswrapper[4867]: I0214 04:38:18.032644 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9"] Feb 14 04:38:18 crc kubenswrapper[4867]: I0214 04:38:18.094846 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerStarted","Data":"29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585"} Feb 14 04:38:19 crc kubenswrapper[4867]: I0214 04:38:19.113696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerStarted","Data":"092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266"} Feb 14 04:38:19 crc kubenswrapper[4867]: I0214 04:38:19.128393 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" podStartSLOduration=1.733767801 podStartE2EDuration="2.128373579s" podCreationTimestamp="2026-02-14 04:38:17 +0000 UTC" firstStartedPulling="2026-02-14 04:38:18.037448688 +0000 UTC m=+1730.118386002" lastFinishedPulling="2026-02-14 04:38:18.432054476 +0000 UTC m=+1730.512991780" observedRunningTime="2026-02-14 04:38:19.128252526 +0000 UTC m=+1731.209189880" watchObservedRunningTime="2026-02-14 04:38:19.128373579 +0000 UTC m=+1731.209310893" Feb 14 04:38:22 crc kubenswrapper[4867]: E0214 04:38:22.909358 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82f2a63e_b256_4ad7_96ee_1def8a174cfb.slice/crio-conmon-0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d.scope\": RecentStats: unable to find data in memory cache]" Feb 14 04:38:23 crc kubenswrapper[4867]: I0214 04:38:23.176834 4867 generic.go:334] "Generic (PLEG): container finished" podID="82f2a63e-b256-4ad7-96ee-1def8a174cfb" containerID="0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d" exitCode=0 Feb 14 04:38:23 crc kubenswrapper[4867]: I0214 04:38:23.176887 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerDied","Data":"0c997e7bc3d5f543f14547386fa8ede76fc6a555faa3b09cca505eba9cd2af8d"} Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.187728 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"82f2a63e-b256-4ad7-96ee-1def8a174cfb","Type":"ContainerStarted","Data":"af9e2c35de2cc94006f292659a9a95da1307cfc7554fb5036d7df0d867dfc8f3"} Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.188446 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.219237 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=37.219216444 podStartE2EDuration="37.219216444s" 
podCreationTimestamp="2026-02-14 04:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:38:24.208438391 +0000 UTC m=+1736.289375725" watchObservedRunningTime="2026-02-14 04:38:24.219216444 +0000 UTC m=+1736.300153748" Feb 14 04:38:24 crc kubenswrapper[4867]: I0214 04:38:24.997331 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:24 crc kubenswrapper[4867]: E0214 04:38:24.997685 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:37 crc kubenswrapper[4867]: I0214 04:38:37.981908 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Feb 14 04:38:37 crc kubenswrapper[4867]: I0214 04:38:37.997430 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:37 crc kubenswrapper[4867]: E0214 04:38:37.997732 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:38 crc kubenswrapper[4867]: I0214 04:38:38.035526 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:42 crc kubenswrapper[4867]: I0214 04:38:42.657463 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" containerID="cri-o://47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" gracePeriod=604796 Feb 14 04:38:47 crc kubenswrapper[4867]: I0214 04:38:47.917832 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.127:5671: connect: connection refused" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471081 4867 generic.go:334] "Generic (PLEG): container finished" podID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerID="47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" exitCode=0 Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471646 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0"} Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.471685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"647ba30a-5526-4e27-9095-680c31ff4eb3","Type":"ContainerDied","Data":"3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136"} Feb 14 04:38:49 crc kubenswrapper[4867]: 
I0214 04:38:49.471700 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dfa840147a64ccb967653d642c377ae9470c558827d87830014de26dfbf1136" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.538272 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694323 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694438 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.694555 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696156 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696287 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: 
\"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696334 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696448 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.696493 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") pod \"647ba30a-5526-4e27-9095-680c31ff4eb3\" (UID: \"647ba30a-5526-4e27-9095-680c31ff4eb3\") " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.697267 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.697693 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.700994 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.701073 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.701093 4867 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.707955 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info" (OuterVolumeSpecName: "pod-info") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.712977 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.737371 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g" (OuterVolumeSpecName: "kube-api-access-6kp9g") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "kube-api-access-6kp9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.738621 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.760092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data" (OuterVolumeSpecName: "config-data") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.761168 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196" (OuterVolumeSpecName: "persistence") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "pvc-5e0ed597-0ada-4a46-9560-1f84a6822196". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.802995 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kp9g\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-kube-api-access-6kp9g\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803235 4867 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/647ba30a-5526-4e27-9095-680c31ff4eb3-pod-info\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803246 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803254 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803261 4867 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/647ba30a-5526-4e27-9095-680c31ff4eb3-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.803468 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") on node \"crc\" " Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.814074 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf" (OuterVolumeSpecName: "server-conf") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.866018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "647ba30a-5526-4e27-9095-680c31ff4eb3" (UID: "647ba30a-5526-4e27-9095-680c31ff4eb3"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.886555 4867 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.886788 4867 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5e0ed597-0ada-4a46-9560-1f84a6822196" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196") on node "crc" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906404 4867 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/647ba30a-5526-4e27-9095-680c31ff4eb3-server-conf\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906486 4867 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/647ba30a-5526-4e27-9095-680c31ff4eb3-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:49 crc kubenswrapper[4867]: I0214 04:38:49.906551 4867 reconciler_common.go:293] "Volume detached for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") on node \"crc\" DevicePath \"\"" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.497281 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.564568 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.572935 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.608073 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: E0214 04:38:50.611536 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.611765 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: E0214 04:38:50.611855 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="setup-container" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.611931 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="setup-container" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.612361 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" containerName="rabbitmq" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.614145 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.625743 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.638547 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.641827 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642207 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642367 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642838 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.642982 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643149 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643432 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643583 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.643829 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745846 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745892 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.745939 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746004 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746020 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746053 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746092 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746106 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746190 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.746252 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.747745 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.748387 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.751248 4867 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.751294 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b6ecbc127793ccdba0f55c49c319b455a0b3bdad6043979264d9c6d7f92205d3/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752005 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752115 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7e279860-a36f-473d-a79a-a34e5820e5a6-config-data\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.752006 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.756301 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7e279860-a36f-473d-a79a-a34e5820e5a6-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.756974 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.759023 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.783037 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qntcb\" (UniqueName: \"kubernetes.io/projected/7e279860-a36f-473d-a79a-a34e5820e5a6-kube-api-access-qntcb\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.786775 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7e279860-a36f-473d-a79a-a34e5820e5a6-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.855587 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5e0ed597-0ada-4a46-9560-1f84a6822196\") pod \"rabbitmq-server-0\" (UID: \"7e279860-a36f-473d-a79a-a34e5820e5a6\") " pod="openstack/rabbitmq-server-0" Feb 14 04:38:50 crc kubenswrapper[4867]: I0214 04:38:50.953753 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.023331 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="647ba30a-5526-4e27-9095-680c31ff4eb3" path="/var/lib/kubelet/pods/647ba30a-5526-4e27-9095-680c31ff4eb3/volumes" Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.535359 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 14 04:38:51 crc kubenswrapper[4867]: I0214 04:38:51.998300 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:38:51 crc kubenswrapper[4867]: E0214 04:38:51.998966 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:38:52 crc kubenswrapper[4867]: I0214 04:38:52.554120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"d5401a97e1f766e18450e5ec1ee7aadecaede15c285c6fcfb043d1ff4ce891e6"} Feb 14 04:38:54 crc kubenswrapper[4867]: I0214 04:38:54.579480 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c"} Feb 14 04:39:03 crc kubenswrapper[4867]: I0214 04:39:03.020551 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:03 crc kubenswrapper[4867]: E0214 04:39:03.037026 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:11 crc kubenswrapper[4867]: I0214 04:39:11.956332 4867 scope.go:117] "RemoveContainer" containerID="2985355e95eee0dc957c0e21e160693198281b44121fdf6f1cd86e16275d7eea" Feb 14 04:39:11 crc kubenswrapper[4867]: I0214 04:39:11.988665 4867 scope.go:117] "RemoveContainer" containerID="47b0dc8cf76452537b6a08713121a73a00752e3dfe3f1a9f1b2a3edca2f295a0" Feb 14 04:39:17 crc kubenswrapper[4867]: I0214 04:39:17.997208 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:17 crc kubenswrapper[4867]: E0214 04:39:17.998146 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:25 crc kubenswrapper[4867]: I0214 04:39:25.951429 4867 generic.go:334] "Generic (PLEG): container finished" podID="7e279860-a36f-473d-a79a-a34e5820e5a6" containerID="275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c" exitCode=0 Feb 14 04:39:25 crc kubenswrapper[4867]: I0214 04:39:25.951518 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerDied","Data":"275e7be6a1276f951172bfaf0e7561f63cbf6ac9f3028d790a5328e07743e27c"} Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.974876 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7e279860-a36f-473d-a79a-a34e5820e5a6","Type":"ContainerStarted","Data":"6d3ff5bc076eb69718b7185fe7e4458fcc2cf0606b4fdca1d9beacf0ba141acb"} Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.975739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 14 04:39:26 crc kubenswrapper[4867]: I0214 04:39:26.998612 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.998590406 podStartE2EDuration="36.998590406s" podCreationTimestamp="2026-02-14 04:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 04:39:26.996113311 +0000 UTC m=+1799.077050625" watchObservedRunningTime="2026-02-14 04:39:26.998590406 +0000 UTC m=+1799.079527720" Feb 14 04:39:32 crc kubenswrapper[4867]: I0214 04:39:32.997416 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:32 crc kubenswrapper[4867]: E0214 04:39:32.998141 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:40 crc kubenswrapper[4867]: I0214 04:39:40.958735 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 14 04:39:43 crc kubenswrapper[4867]: I0214 04:39:43.998077 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:43 crc kubenswrapper[4867]: E0214 04:39:43.998708 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.043496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-t56pc"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.074686 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.092352 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-t56pc"] Feb 14 04:39:56 crc kubenswrapper[4867]: I0214 04:39:56.104624 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-cff6-account-create-update-ktnvw"] Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.011572 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fef49b7-7486-40dc-aedc-9814adb071e2" path="/var/lib/kubelet/pods/0fef49b7-7486-40dc-aedc-9814adb071e2/volumes" Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.012961 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72434a2-25c0-4fd4-89cf-eff7bee167c3" path="/var/lib/kubelet/pods/b72434a2-25c0-4fd4-89cf-eff7bee167c3/volumes" Feb 14 04:39:57 crc kubenswrapper[4867]: I0214 04:39:57.997836 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:39:57 crc kubenswrapper[4867]: E0214 04:39:57.998451 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.040143 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.054650 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.074496 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.089261 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a782-account-create-update-dzhfz"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.100845 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-brnhd"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.114086 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.125446 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-aef7-account-create-update-w7xz9"] Feb 14 04:39:58 crc kubenswrapper[4867]: I0214 04:39:58.136990 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qmj24"] Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.013116 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853d3739-366e-498f-ac28-6df19ee88dee" path="/var/lib/kubelet/pods/853d3739-366e-498f-ac28-6df19ee88dee/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.015565 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1b76a6-cc66-4a23-893d-df38ba5aac38" 
path="/var/lib/kubelet/pods/af1b76a6-cc66-4a23-893d-df38ba5aac38/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.017288 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b10f828b-59d6-4eb2-8922-aec92f274280" path="/var/lib/kubelet/pods/b10f828b-59d6-4eb2-8922-aec92f274280/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.018439 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62c2a1e-55e4-4b7d-90db-ab37eecdb659" path="/var/lib/kubelet/pods/e62c2a1e-55e4-4b7d-90db-ab37eecdb659/volumes" Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.035466 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:39:59 crc kubenswrapper[4867]: I0214 04:39:59.048133 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4f85-account-create-update-7m6h2"] Feb 14 04:40:00 crc kubenswrapper[4867]: I0214 04:40:00.030988 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:40:00 crc kubenswrapper[4867]: I0214 04:40:00.042774 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-7klnf"] Feb 14 04:40:01 crc kubenswrapper[4867]: I0214 04:40:01.013065 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1207dbcf-080a-40c2-a0cb-ab39e7225aaf" path="/var/lib/kubelet/pods/1207dbcf-080a-40c2-a0cb-ab39e7225aaf/volumes" Feb 14 04:40:01 crc kubenswrapper[4867]: I0214 04:40:01.015302 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8913cb-b163-4973-b6e2-ac741177964e" path="/var/lib/kubelet/pods/fa8913cb-b163-4973-b6e2-ac741177964e/volumes" Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.040787 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.057351 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-pjc8k"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.069248 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:40:09 crc kubenswrapper[4867]: I0214 04:40:09.078755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-92c4-account-create-update-r2w8b"] Feb 14 04:40:10 crc kubenswrapper[4867]: I0214 04:40:10.998311 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:40:10 crc kubenswrapper[4867]: E0214 04:40:10.998903 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:40:11 crc kubenswrapper[4867]: I0214 04:40:11.012845 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a" path="/var/lib/kubelet/pods/2e27a3cb-c301-4fa0-b9a1-9aa3bac0305a/volumes" Feb 14 04:40:11 crc kubenswrapper[4867]: I0214 04:40:11.014008 4867 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="36e07f1b-6481-42a9-a605-b472a8cc3945" path="/var/lib/kubelet/pods/36e07f1b-6481-42a9-a605-b472a8cc3945/volumes" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.109778 4867 scope.go:117] "RemoveContainer" containerID="50f6a1e55c135273f16192c4d930b15a06776fce11c739aadacaa3a89fc4b153" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.134195 4867 scope.go:117] "RemoveContainer" containerID="6169e5fdf0e74fe086570773b95de46198a0244319d8d869f06e9d58ae4d08cb" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.198604 4867 scope.go:117] "RemoveContainer" containerID="41305e93b907718ed0332e27cd0c47623d93ba3f8546dbde9032dfe08f5e2a6c" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.260185 4867 scope.go:117] "RemoveContainer" containerID="659356ffd1920059def60984a1f291aad46ef6d15393b55c49987a54a05704a7" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.331339 4867 scope.go:117] "RemoveContainer" containerID="7ee48e595ead334c45b0c14aeec7251dc9cd4d60d85c2a40a47348b3ee0e687a" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.381720 4867 scope.go:117] "RemoveContainer" containerID="f4258135bf11c6ed1dd99f5c1f581fcb97da6e22ed3370067c3b4edacd5e6962" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.435843 4867 scope.go:117] "RemoveContainer" containerID="4f99901f0da4b1da0863796edd2dde44662d1bb2b2807e64f939fdf575d0e6af" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.457097 4867 scope.go:117] "RemoveContainer" containerID="027f7b47ecf95746bb9733dbd606f94b7866eecb1f1ce8cb4d1598a367884200" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.485071 4867 scope.go:117] "RemoveContainer" containerID="ae0a83f28bdc3a06d4663a0d9d8e67b0716eee94221bc552fd5d22ba9ecc6605" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.524038 4867 scope.go:117] "RemoveContainer" containerID="4331549532fda4f50fc6d3ddd019e8a773925579f6102f8ec4140112305629a4" Feb 14 04:40:12 crc kubenswrapper[4867]: I0214 04:40:12.551139 4867 scope.go:117] "RemoveContainer" containerID="63b1841b94ccfe878085e7aaa4ff2044786571fd3492e4ffbe7576e35506afb2" Feb 14 04:40:23 crc kubenswrapper[4867]: I0214 04:40:23.997354 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:40:23 crc kubenswrapper[4867]: E0214 04:40:23.998380 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:40:26 crc kubenswrapper[4867]: I0214 04:40:26.048748 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:40:26 crc kubenswrapper[4867]: I0214 04:40:26.062932 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-k62wg"] Feb 14 04:40:27 crc kubenswrapper[4867]: I0214 04:40:27.009823 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0d44618-795d-4cc5-a98b-c0c5d77ffdcb" path="/var/lib/kubelet/pods/f0d44618-795d-4cc5-a98b-c0c5d77ffdcb/volumes" Feb 14 04:40:34 crc kubenswrapper[4867]: I0214 04:40:34.997158 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" 
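The cycle just above, an info-level "RemoveContainer" for container 7203a3aa09f0 followed at once by the error-level "Error syncing pod, skipping ... CrashLoopBackOff: back-off 5m0s restarting failed container=machine-config-daemon", recurs throughout this window (04:39:57, 04:40:10, 04:40:23, 04:40:34). This is kubelet's per-container restart backoff: the delay roughly doubles after each failed restart up to the 5m0s ceiling quoted in the message, and while the timer is running every sync attempt for the pod is skipped with exactly this error. The Go sketch below illustrates that schedule; it assumes the commonly cited 10s initial delay (the 5m cap is taken from the log line itself), and backoffAfter is a hypothetical helper for illustration, not kubelet source.

    // Illustrative sketch (not kubelet code): the restart-delay schedule
    // behind the repeated "back-off 5m0s restarting failed container"
    // errors above. The delay doubles per consecutive failure and is
    // capped at the 5m0s ceiling reported in the log.
    package main

    import (
        "fmt"
        "time"
    )

    const (
        initialBackoff = 10 * time.Second // assumed first restart delay
        maxBackoff     = 5 * time.Minute  // matches "back-off 5m0s" in the log
    )

    // backoffAfter returns the delay applied after n consecutive failures.
    func backoffAfter(n int) time.Duration {
        d := initialBackoff
        for i := 1; i < n; i++ {
            d *= 2
            if d >= maxBackoff {
                return maxBackoff
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, backoffAfter(n))
        }
        // Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s: the schedule
        // reaches the 5m0s ceiling by the sixth failure, after which each
        // sync attempt is skipped with the "Error syncing pod" message
        // until the timer expires.
    }

When the backoff window finally lapses here, the retry at 04:41:13 is allowed through and the container starts (04:41:14, ContainerStarted 8ef22e983ed3), with no further back-off message until the later liveness-probe failures.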
Feb 14 04:40:34 crc kubenswrapper[4867]: E0214 04:40:34.998057 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:40:48 crc kubenswrapper[4867]: I0214 04:40:48.032158 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:40:48 crc kubenswrapper[4867]: I0214 04:40:48.046755 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gzvxs"] Feb 14 04:40:49 crc kubenswrapper[4867]: I0214 04:40:49.010902 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:40:49 crc kubenswrapper[4867]: I0214 04:40:49.011915 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2" path="/var/lib/kubelet/pods/e0c27ba6-c090-4bb9-a3cc-25e3c5f117e2/volumes" Feb 14 04:40:49 crc kubenswrapper[4867]: E0214 04:40:49.012130 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.051908 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.067015 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.086876 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.100340 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fad3-account-create-update-zwwh5"] Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.111700 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-9vmb7"] Feb 14 04:40:55 crc kubenswrapper[4867]: I0214 04:40:55.123356 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-f62v7"] Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.010877 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c993d62-94a7-4903-b984-adcef36b53b8" path="/var/lib/kubelet/pods/9c993d62-94a7-4903-b984-adcef36b53b8/volumes" Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.012786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd001336-81f9-43f6-9540-432047e6c98a" path="/var/lib/kubelet/pods/bd001336-81f9-43f6-9540-432047e6c98a/volumes" Feb 14 04:40:57 crc kubenswrapper[4867]: I0214 04:40:57.014166 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90d34b6-263e-4515-a13a-a41fda1c40ca" path="/var/lib/kubelet/pods/f90d34b6-263e-4515-a13a-a41fda1c40ca/volumes" Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.038208 4867 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.056723 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3b6b-account-create-update-74g2s"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.071935 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.083390 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.094923 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.105192 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-07f7-account-create-update-k24c7"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.116899 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7kcws"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.127963 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.138000 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8zqfs"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.148178 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bab0-account-create-update-kmfpg"] Feb 14 04:40:59 crc kubenswrapper[4867]: I0214 04:40:59.997552 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:40:59 crc kubenswrapper[4867]: E0214 04:40:59.998018 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.011916 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c5e9025-3781-4461-98d7-0d0d72c3b59b" path="/var/lib/kubelet/pods/2c5e9025-3781-4461-98d7-0d0d72c3b59b/volumes" Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.012979 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6961722f-b14d-42f2-bd56-68686c2e8a9a" path="/var/lib/kubelet/pods/6961722f-b14d-42f2-bd56-68686c2e8a9a/volumes" Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.014638 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1826e5b-3563-455f-9caf-9c4ee203210f" path="/var/lib/kubelet/pods/b1826e5b-3563-455f-9caf-9c4ee203210f/volumes" Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.016223 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14b9ea2-b4ee-4365-8b77-d58ff122fabb" path="/var/lib/kubelet/pods/c14b9ea2-b4ee-4365-8b77-d58ff122fabb/volumes" Feb 14 04:41:01 crc kubenswrapper[4867]: I0214 04:41:01.018926 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f3a1a1-5734-4782-98e1-1eb22cfbdf93" path="/var/lib/kubelet/pods/d1f3a1a1-5734-4782-98e1-1eb22cfbdf93/volumes" Feb 14 04:41:04 crc 
kubenswrapper[4867]: I0214 04:41:04.037583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:41:04 crc kubenswrapper[4867]: I0214 04:41:04.049809 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gk75z"] Feb 14 04:41:05 crc kubenswrapper[4867]: I0214 04:41:05.012170 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49af28f1-d33f-4717-81a7-4377bfef388c" path="/var/lib/kubelet/pods/49af28f1-d33f-4717-81a7-4377bfef388c/volumes" Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.829627 4867 scope.go:117] "RemoveContainer" containerID="8ee377ab9df59755c2608bf160912f4986e5a570c0b163efea645d0bbf2907f0" Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.855016 4867 scope.go:117] "RemoveContainer" containerID="8042db461fd6eabaa93681751cc5037c8a7ddd74046cd943405dc18cc37f069c" Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.882894 4867 scope.go:117] "RemoveContainer" containerID="cb180091e4ae70970aa78bde495475b793634681199f41c69a03b8635b020332" Feb 14 04:41:12 crc kubenswrapper[4867]: I0214 04:41:12.963571 4867 scope.go:117] "RemoveContainer" containerID="8d4513234d1fad24212cdf82718a385562881173fcd13074ff0a12c06d73e620" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.020802 4867 scope.go:117] "RemoveContainer" containerID="645d09ab3ab20918409aff17c8b3710b4ffbfa06ad1a509445fe4ca8b7901e2d" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.074856 4867 scope.go:117] "RemoveContainer" containerID="abb5bce0228ffe2b4f577c72d541587bc9ccc14c780b4813bbfbccab7bd48336" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.101617 4867 scope.go:117] "RemoveContainer" containerID="4f77da80359dbcaaf7f1b0862edf00e5f51cbdfe953464edb0d8a0f3cd5a1425" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.159930 4867 scope.go:117] "RemoveContainer" containerID="b0ee3d8476bae8f4a3fe8c62bb7c061a9556901f3c45531ad9e5c2cc20102b49" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.223035 4867 scope.go:117] "RemoveContainer" containerID="e481f6b0c38be3cb0239424de842f33edc585ce836916de0d7d544ab198683d3" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.255561 4867 scope.go:117] "RemoveContainer" containerID="2bdf28b1e859bb5d2211947dae2797aa206db181b3539ea0de854f0f3e6d89c6" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.280889 4867 scope.go:117] "RemoveContainer" containerID="cbf0ef6610c1740254fda0700aa42a6fdd3885fcc7d65e0c4bc4ef1fc1f78288" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.301147 4867 scope.go:117] "RemoveContainer" containerID="da8ab728620d5f0651397fa356c829bf5bff0ab2414fec4cf72bb2494ac4d8b1" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.319316 4867 scope.go:117] "RemoveContainer" containerID="bd098d1d3f5431ee4dfc77512f72bdb3c684d719a4f758c6fe63d5e6f0d5b682" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.340016 4867 scope.go:117] "RemoveContainer" containerID="dac7c15e8d204db1888f9efc6944db09a4f811e1647c31593e86131c9a51b98c" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.358923 4867 scope.go:117] "RemoveContainer" containerID="d05fe3ff5d6d0b733fa083ac07e6cf3331ccf5ca5bbba2a8f738913293195786" Feb 14 04:41:13 crc kubenswrapper[4867]: I0214 04:41:13.997074 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:41:14 crc kubenswrapper[4867]: I0214 04:41:14.266696 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"} Feb 14 04:41:24 crc kubenswrapper[4867]: I0214 04:41:24.385966 4867 generic.go:334] "Generic (PLEG): container finished" podID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerID="092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266" exitCode=0 Feb 14 04:41:24 crc kubenswrapper[4867]: I0214 04:41:24.386102 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerDied","Data":"092ff2e32550d64bb67818137543ba61871a207e331e990a8f5b06ace8a5b266"} Feb 14 04:41:25 crc kubenswrapper[4867]: I0214 04:41:25.903744 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000427 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.000733 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") pod \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\" (UID: \"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321\") " Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.008315 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh" (OuterVolumeSpecName: "kube-api-access-n99rh") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "kube-api-access-n99rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.009163 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.043564 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.047367 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory" (OuterVolumeSpecName: "inventory") pod "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" (UID: "e3d43ea0-54e7-4fd1-892d-bbc3d01a5321"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109636 4867 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109688 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109699 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n99rh\" (UniqueName: \"kubernetes.io/projected/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-kube-api-access-n99rh\") on node \"crc\" DevicePath \"\"" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.109710 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e3d43ea0-54e7-4fd1-892d-bbc3d01a5321-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.409955 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" event={"ID":"e3d43ea0-54e7-4fd1-892d-bbc3d01a5321","Type":"ContainerDied","Data":"29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585"} Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.410527 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29be09ee8292887c8dae314e3fa0f7206f5042ff48d634e5b2ba0410adb6d585" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.410091 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.512391 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"] Feb 14 04:41:26 crc kubenswrapper[4867]: E0214 04:41:26.513594 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.513721 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.514119 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d43ea0-54e7-4fd1-892d-bbc3d01a5321" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.515200 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.517646 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.517938 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.518290 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.518613 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.527268 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"] Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.629088 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.629235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.630024 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: 
I0214 04:41:26.732594 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.732705 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.732834 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.739669 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.739973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.754016 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:26 crc kubenswrapper[4867]: I0214 04:41:26.906912 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:41:27 crc kubenswrapper[4867]: I0214 04:41:27.955423 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs"] Feb 14 04:41:27 crc kubenswrapper[4867]: I0214 04:41:27.962119 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:41:28 crc kubenswrapper[4867]: I0214 04:41:28.443261 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerStarted","Data":"0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9"} Feb 14 04:41:29 crc kubenswrapper[4867]: I0214 04:41:29.457132 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerStarted","Data":"88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0"} Feb 14 04:41:29 crc kubenswrapper[4867]: I0214 04:41:29.554962 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" podStartSLOduration=3.159906951 podStartE2EDuration="3.55493972s" podCreationTimestamp="2026-02-14 04:41:26 +0000 UTC" firstStartedPulling="2026-02-14 04:41:27.961880355 +0000 UTC m=+1920.042817669" lastFinishedPulling="2026-02-14 04:41:28.356913124 +0000 UTC m=+1920.437850438" observedRunningTime="2026-02-14 04:41:29.544576238 +0000 UTC m=+1921.625513562" watchObservedRunningTime="2026-02-14 04:41:29.55493972 +0000 UTC m=+1921.635877034" Feb 14 04:41:36 crc kubenswrapper[4867]: I0214 04:41:36.049296 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:41:36 crc kubenswrapper[4867]: I0214 04:41:36.068369 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-425tq"] Feb 14 04:41:37 crc kubenswrapper[4867]: I0214 04:41:37.014953 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed6edd10-56a9-4431-bb38-7b266f802e63" path="/var/lib/kubelet/pods/ed6edd10-56a9-4431-bb38-7b266f802e63/volumes" Feb 14 04:41:47 crc kubenswrapper[4867]: I0214 04:41:47.074725 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:41:47 crc kubenswrapper[4867]: I0214 04:41:47.086715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-9zrmj"] Feb 14 04:41:48 crc kubenswrapper[4867]: I0214 04:41:48.043812 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:41:48 crc kubenswrapper[4867]: I0214 04:41:48.064032 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gdzwh"] Feb 14 04:41:49 crc kubenswrapper[4867]: I0214 04:41:49.020225 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87589008-b930-4698-b94b-883c707d5fb1" path="/var/lib/kubelet/pods/87589008-b930-4698-b94b-883c707d5fb1/volumes" Feb 14 04:41:49 crc kubenswrapper[4867]: I0214 04:41:49.021273 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffefbab2-8288-4eaa-9df3-e95383cdf19d" path="/var/lib/kubelet/pods/ffefbab2-8288-4eaa-9df3-e95383cdf19d/volumes" Feb 14 04:41:58 crc kubenswrapper[4867]: I0214 
04:41:58.050427 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:41:58 crc kubenswrapper[4867]: I0214 04:41:58.066816 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-mklx7"] Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.010881 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccb73cc-2b89-4363-b7ca-44dfa627d9f9" path="/var/lib/kubelet/pods/cccb73cc-2b89-4363-b7ca-44dfa627d9f9/volumes" Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.039409 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:41:59 crc kubenswrapper[4867]: I0214 04:41:59.052284 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-grkqh"] Feb 14 04:42:01 crc kubenswrapper[4867]: I0214 04:42:01.011203 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c973bde-ff14-4cce-9f9c-57354dbd4adb" path="/var/lib/kubelet/pods/9c973bde-ff14-4cce-9f9c-57354dbd4adb/volumes" Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.691040 4867 scope.go:117] "RemoveContainer" containerID="933362dc125c07b501be0afbe062e3a9150917f293f02be88bdfafccd96cea38" Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.733077 4867 scope.go:117] "RemoveContainer" containerID="b4af422ec473bd7a3a6d6b89b2e7229c4375e35cf75e8494db638d7095f07468" Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.795599 4867 scope.go:117] "RemoveContainer" containerID="cbc1c766da784a3e5453caf17699272e324db8e8f9f9c7202b12542f06aac4da" Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.864040 4867 scope.go:117] "RemoveContainer" containerID="f215c5a914efdb087a943f5dda611b846de12406e04a977d9c6c6acb8ed9e635" Feb 14 04:42:13 crc kubenswrapper[4867]: I0214 04:42:13.924274 4867 scope.go:117] "RemoveContainer" containerID="42546acb8bf1d18a2013b6f620e8fb872f570e002bf0d9270838f9f12f95b201" Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.045534 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.083558 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.098344 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-8539-account-create-update-9j9p8"] Feb 14 04:42:44 crc kubenswrapper[4867]: I0214 04:42:44.111335 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-5ffts"] Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.010212 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289f81c2-9092-4a51-a1b4-8eedaa09aedb" path="/var/lib/kubelet/pods/289f81c2-9092-4a51-a1b4-8eedaa09aedb/volumes" Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.011116 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7729cf-7332-4432-999f-fbee997b2201" path="/var/lib/kubelet/pods/2b7729cf-7332-4432-999f-fbee997b2201/volumes" Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.031080 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:42:45 crc kubenswrapper[4867]: I0214 04:42:45.042021 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-t8trt"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.038987 4867 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.062204 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.080105 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.090432 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-slfhr"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.101169 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8094-account-create-update-pbbgl"] Feb 14 04:42:46 crc kubenswrapper[4867]: I0214 04:42:46.112959 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a338-account-create-update-2zjhb"] Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.012539 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041c55d6-87c7-47b4-a53b-9b38cb85e3d2" path="/var/lib/kubelet/pods/041c55d6-87c7-47b4-a53b-9b38cb85e3d2/volumes" Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.014628 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708fbc3f-a05a-4b29-b455-32db117495d1" path="/var/lib/kubelet/pods/708fbc3f-a05a-4b29-b455-32db117495d1/volumes" Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.015948 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730dbd9b-ddff-4d09-89ff-b9135ed83042" path="/var/lib/kubelet/pods/730dbd9b-ddff-4d09-89ff-b9135ed83042/volumes" Feb 14 04:42:47 crc kubenswrapper[4867]: I0214 04:42:47.019768 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c71d92-a9d1-4256-b7be-678dc34d1562" path="/var/lib/kubelet/pods/80c71d92-a9d1-4256-b7be-678dc34d1562/volumes" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.103100 4867 scope.go:117] "RemoveContainer" containerID="d2f2315be8742d702e7dd2d0f528c431c081e7e1ce092b2f26f01dd567075c43" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.232097 4867 scope.go:117] "RemoveContainer" containerID="ac04f78f97056d2b2550db33626b10963bebb9d175cf60c35210d274045c9458" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.259400 4867 scope.go:117] "RemoveContainer" containerID="3e1ef6da3ebdc2673f2981d47e0b77af1c8ade8d3cd5fb3292ef5cb9e14386e5" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.311590 4867 scope.go:117] "RemoveContainer" containerID="0c5aa3d36bd716587576d157b08b003ad1372b31da48794e4d003f7f4a82a1b3" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.369416 4867 scope.go:117] "RemoveContainer" containerID="edb8483472d537c583af237081de995fee4a32c9b18a192549b88c1b5ca41e5a" Feb 14 04:43:14 crc kubenswrapper[4867]: I0214 04:43:14.436090 4867 scope.go:117] "RemoveContainer" containerID="6bd7d606fb9b6188c28f7b964e2aed897ff801c850465bbc0ee30e5f3fa5796c" Feb 14 04:43:18 crc kubenswrapper[4867]: I0214 04:43:18.686936 4867 generic.go:334] "Generic (PLEG): container finished" podID="879dee23-804e-4b8a-ac20-0546383202b0" containerID="88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0" exitCode=0 Feb 14 04:43:18 crc kubenswrapper[4867]: I0214 04:43:18.687034 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" 
event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerDied","Data":"88077af96545be122279d1b3f191975503bdfd1844ffea9e66c95cf4f20aead0"} Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.249052 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.311717 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.311836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.312192 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") pod \"879dee23-804e-4b8a-ac20-0546383202b0\" (UID: \"879dee23-804e-4b8a-ac20-0546383202b0\") " Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.322888 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d" (OuterVolumeSpecName: "kube-api-access-89f9d") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "kube-api-access-89f9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.356329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.365702 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory" (OuterVolumeSpecName: "inventory") pod "879dee23-804e-4b8a-ac20-0546383202b0" (UID: "879dee23-804e-4b8a-ac20-0546383202b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416271 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89f9d\" (UniqueName: \"kubernetes.io/projected/879dee23-804e-4b8a-ac20-0546383202b0-kube-api-access-89f9d\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416303 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.416316 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/879dee23-804e-4b8a-ac20-0546383202b0-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.708644 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" event={"ID":"879dee23-804e-4b8a-ac20-0546383202b0","Type":"ContainerDied","Data":"0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9"} Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.709055 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ea6ba3fbefa772411725e98100241b5cb4626f4565f14146e95e611286f63e9" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.708693 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.807530 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:20 crc kubenswrapper[4867]: E0214 04:43:20.808202 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.808229 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.808558 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="879dee23-804e-4b8a-ac20-0546383202b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.809821 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.813787 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.813840 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.814132 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.814395 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.838549 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.927485 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.927596 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:20 crc kubenswrapper[4867]: I0214 04:43:20.928102 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030667 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030733 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.030915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: 
\"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.035493 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.046897 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.048364 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.135173 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.679681 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf"] Feb 14 04:43:21 crc kubenswrapper[4867]: I0214 04:43:21.725397 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerStarted","Data":"208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa"} Feb 14 04:43:22 crc kubenswrapper[4867]: I0214 04:43:22.737742 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerStarted","Data":"850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6"} Feb 14 04:43:22 crc kubenswrapper[4867]: I0214 04:43:22.761346 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" podStartSLOduration=2.329171393 podStartE2EDuration="2.761214264s" podCreationTimestamp="2026-02-14 04:43:20 +0000 UTC" firstStartedPulling="2026-02-14 04:43:21.684170466 +0000 UTC m=+2033.765107780" lastFinishedPulling="2026-02-14 04:43:22.116213337 +0000 UTC m=+2034.197150651" observedRunningTime="2026-02-14 04:43:22.754432295 +0000 UTC m=+2034.835369609" watchObservedRunningTime="2026-02-14 04:43:22.761214264 +0000 UTC m=+2034.842151598" Feb 14 04:43:31 crc kubenswrapper[4867]: I0214 04:43:31.251332 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:43:31 crc kubenswrapper[4867]: I0214 04:43:31.252096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:43:43 crc kubenswrapper[4867]: I0214 04:43:43.060224 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:43:43 crc kubenswrapper[4867]: I0214 04:43:43.075961 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-vwg9c"] Feb 14 04:43:45 crc kubenswrapper[4867]: I0214 04:43:45.013520 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd08e0e3-a41f-4b25-b71a-1c968410d52e" path="/var/lib/kubelet/pods/cd08e0e3-a41f-4b25-b71a-1c968410d52e/volumes" Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.050668 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.068817 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.081572 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-4dwll"] Feb 14 04:43:49 crc kubenswrapper[4867]: I0214 04:43:49.091810 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-42f0-account-create-update-vx5cp"] Feb 14 04:43:51 crc kubenswrapper[4867]: I0214 04:43:51.014871 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="486bfb80-5589-4e9e-84d3-10726a066702" path="/var/lib/kubelet/pods/486bfb80-5589-4e9e-84d3-10726a066702/volumes" Feb 14 04:43:51 crc kubenswrapper[4867]: I0214 04:43:51.016972 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4aa569b6-1ec2-48e8-99c2-f165e5ea9604" path="/var/lib/kubelet/pods/4aa569b6-1ec2-48e8-99c2-f165e5ea9604/volumes" Feb 14 04:44:01 crc kubenswrapper[4867]: I0214 04:44:01.250902 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:44:01 crc kubenswrapper[4867]: I0214 04:44:01.251525 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:44:09 crc kubenswrapper[4867]: I0214 04:44:09.093575 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:44:09 crc kubenswrapper[4867]: I0214 04:44:09.124715 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8pszd"] Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.022010 4867 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="9947f337-0734-4b4e-bc31-e68e6354ed74" path="/var/lib/kubelet/pods/9947f337-0734-4b4e-bc31-e68e6354ed74/volumes" Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.033745 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"] Feb 14 04:44:11 crc kubenswrapper[4867]: I0214 04:44:11.043249 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-jw78d"] Feb 14 04:44:13 crc kubenswrapper[4867]: I0214 04:44:13.020366 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbf3a42-f012-4bed-a60e-1defcd0b1af9" path="/var/lib/kubelet/pods/2bbf3a42-f012-4bed-a60e-1defcd0b1af9/volumes" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.595552 4867 scope.go:117] "RemoveContainer" containerID="9434b7a5d62d84c5fafd89a974659be60c5965c5fe3ab11c7ca5ecbded575989" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.636633 4867 scope.go:117] "RemoveContainer" containerID="25d2bb0267b03452021a150ec90554f6e1f81995014c999f80f860ac88461b64" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.703108 4867 scope.go:117] "RemoveContainer" containerID="f354428129d549a2471d562380d7b2183b151280e2771b123ea6777b6dcf2c51" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.793744 4867 scope.go:117] "RemoveContainer" containerID="0f96994fd5725370a862ce87b1e8d08bfc4ff10235813b94e745a18d93f42f91" Feb 14 04:44:14 crc kubenswrapper[4867]: I0214 04:44:14.872748 4867 scope.go:117] "RemoveContainer" containerID="4c91a1eedf3612a0a64e4ffb88ac40594ed3abc921178439efbfe687de9b9c76" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.251338 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.251957 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.252020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.253157 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.253232 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65" gracePeriod=600 Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.589179 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.593243 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.602989 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638357 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638451 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.638556 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642696 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65" exitCode=0 Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"} Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.642766 4867 scope.go:117] "RemoveContainer" containerID="7203a3aa09f0fa634ee4bcd02b0e1dff1e29376e8dd84a4e743cbea72d4c480e" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741186 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741673 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741716 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: 
\"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.741749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.742324 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.766225 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"community-operators-r75vv\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:31 crc kubenswrapper[4867]: I0214 04:44:31.926620 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.602057 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.670433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"429bdccd454e07224012faaaa97764f590a609292e1cea0ebe0e35d368f7b141"} Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.674228 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"} Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.678389 4867 generic.go:334] "Generic (PLEG): container finished" podID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerID="850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6" exitCode=0 Feb 14 04:44:32 crc kubenswrapper[4867]: I0214 04:44:32.678449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerDied","Data":"850da5d1fa200fe1da722734f617061d3c9bc463258327d71d858df718dab9e6"} Feb 14 04:44:33 crc kubenswrapper[4867]: I0214 04:44:33.692591 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" exitCode=0 Feb 14 04:44:33 crc kubenswrapper[4867]: I0214 04:44:33.692670 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.187740 4867 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.306291 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.306786 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.307027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") pod \"a716bc3f-98b5-4c50-af5f-46de007bd255\" (UID: \"a716bc3f-98b5-4c50-af5f-46de007bd255\") " Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.313037 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s" (OuterVolumeSpecName: "kube-api-access-9mc6s") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). InnerVolumeSpecName "kube-api-access-9mc6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.342213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory" (OuterVolumeSpecName: "inventory") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.350855 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a716bc3f-98b5-4c50-af5f-46de007bd255" (UID: "a716bc3f-98b5-4c50-af5f-46de007bd255"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410264 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410301 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mc6s\" (UniqueName: \"kubernetes.io/projected/a716bc3f-98b5-4c50-af5f-46de007bd255-kube-api-access-9mc6s\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.410312 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a716bc3f-98b5-4c50-af5f-46de007bd255-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.705350 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708529 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" event={"ID":"a716bc3f-98b5-4c50-af5f-46de007bd255","Type":"ContainerDied","Data":"208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa"} Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708576 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.708587 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="208a79f3cfc52aaff17abca229e10a8824ca713a1e3a5b62ea85e80419b33efa" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.882081 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:34 crc kubenswrapper[4867]: E0214 04:44:34.882817 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.882867 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.883220 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a716bc3f-98b5-4c50-af5f-46de007bd255" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.884388 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890313 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890620 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890814 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.890975 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:44:34 crc kubenswrapper[4867]: I0214 04:44:34.933140 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.025739 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.025958 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.026213 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128361 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.128488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: 
\"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.133165 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.133562 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.145904 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.178762 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.182019 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.189993 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.233224 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333144 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333493 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.333669 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439275 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.439719 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.440765 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.441153 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"redhat-operators-5tmrm\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.470091 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"redhat-operators-5tmrm\" 
(UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.677304 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:35 crc kubenswrapper[4867]: I0214 04:44:35.874906 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns"] Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.252806 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:44:36 crc kubenswrapper[4867]: W0214 04:44:36.257313 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7288f7d_b1ef_4c2e_afab_abf0640eca5b.slice/crio-0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa WatchSource:0}: Error finding container 0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa: Status 404 returned error can't find the container with id 0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.737447 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" exitCode=0 Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.737865 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.741766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerStarted","Data":"608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.747860 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} Feb 14 04:44:36 crc kubenswrapper[4867]: I0214 04:44:36.747908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.778652 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerStarted","Data":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.781991 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerStarted","Data":"3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.784973 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" exitCode=0 Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.785007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.816249 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r75vv" podStartSLOduration=3.223748605 podStartE2EDuration="6.816223212s" podCreationTimestamp="2026-02-14 04:44:31 +0000 UTC" firstStartedPulling="2026-02-14 04:44:33.696158823 +0000 UTC m=+2105.777096147" lastFinishedPulling="2026-02-14 04:44:37.28863344 +0000 UTC m=+2109.369570754" observedRunningTime="2026-02-14 04:44:37.799092282 +0000 UTC m=+2109.880029616" watchObservedRunningTime="2026-02-14 04:44:37.816223212 +0000 UTC m=+2109.897160526" Feb 14 04:44:37 crc kubenswrapper[4867]: I0214 04:44:37.855310 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" podStartSLOduration=3.362497501 podStartE2EDuration="3.855285468s" podCreationTimestamp="2026-02-14 04:44:34 +0000 UTC" firstStartedPulling="2026-02-14 04:44:35.922674221 +0000 UTC m=+2108.003611535" lastFinishedPulling="2026-02-14 04:44:36.415462188 +0000 UTC m=+2108.496399502" observedRunningTime="2026-02-14 04:44:37.837376507 +0000 UTC m=+2109.918313821" watchObservedRunningTime="2026-02-14 04:44:37.855285468 +0000 UTC m=+2109.936222782" Feb 14 04:44:38 crc kubenswrapper[4867]: I0214 04:44:38.797625 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.927067 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.927751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:41 crc kubenswrapper[4867]: I0214 04:44:41.994823 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.841682 4867 generic.go:334] "Generic (PLEG): container finished" podID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerID="3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469" exitCode=0 Feb 14 04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.841766 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerDied","Data":"3815f58046638aaf7f2b843997ead59144a79e71583e19548be80d855ca3b469"} Feb 14 04:44:42 crc kubenswrapper[4867]: I0214 04:44:42.906805 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.770282 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.853658 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" exitCode=0 Feb 14 04:44:43 crc kubenswrapper[4867]: I0214 04:44:43.853725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.606058 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.785676 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.785907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.786046 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") pod \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\" (UID: \"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be\") " Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.794893 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l" (OuterVolumeSpecName: "kube-api-access-mpk7l") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "kube-api-access-mpk7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.829350 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.832549 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory" (OuterVolumeSpecName: "inventory") pod "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" (UID: "6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.866244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerStarted","Data":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870188 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" event={"ID":"6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be","Type":"ContainerDied","Data":"608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc"} Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870342 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608fe4d2e1e82ab95ee69da48f73cb0f32b952078e33f199d4f4180bbeaafdbc" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870256 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.870243 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r75vv" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" containerID="cri-o://666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" gracePeriod=2 Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889036 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpk7l\" (UniqueName: \"kubernetes.io/projected/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-kube-api-access-mpk7l\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889087 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.889101 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.894622 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5tmrm" podStartSLOduration=3.415196339 podStartE2EDuration="9.894601515s" podCreationTimestamp="2026-02-14 04:44:35 +0000 UTC" firstStartedPulling="2026-02-14 04:44:37.788452312 +0000 UTC m=+2109.869389626" lastFinishedPulling="2026-02-14 04:44:44.267857488 +0000 UTC m=+2116.348794802" observedRunningTime="2026-02-14 04:44:44.891861053 +0000 UTC m=+2116.972798367" watchObservedRunningTime="2026-02-14 04:44:44.894601515 +0000 UTC m=+2116.975538829" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.971172 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:44 crc kubenswrapper[4867]: E0214 04:44:44.992030 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.992076 4867 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:44 crc kubenswrapper[4867]: I0214 04:44:44.993128 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:44.999985 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.005233 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.009969 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.012400 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.012586 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.014523 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.097579 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.098077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.098282 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.201679 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.201889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.202003 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.209879 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.211143 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.234044 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-c22xw\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.280227 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.338852 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405230 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405363 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.405468 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") pod \"b5adcee9-1419-4c20-b96e-4886a1f19c68\" (UID: \"b5adcee9-1419-4c20-b96e-4886a1f19c68\") " Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.409055 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities" (OuterVolumeSpecName: "utilities") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.410033 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp" (OuterVolumeSpecName: "kube-api-access-xp7hp") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "kube-api-access-xp7hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.458421 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5adcee9-1419-4c20-b96e-4886a1f19c68" (UID: "b5adcee9-1419-4c20-b96e-4886a1f19c68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508057 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508380 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp7hp\" (UniqueName: \"kubernetes.io/projected/b5adcee9-1419-4c20-b96e-4886a1f19c68-kube-api-access-xp7hp\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.508393 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adcee9-1419-4c20-b96e-4886a1f19c68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.677781 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.677990 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.883477 4867 generic.go:334] "Generic (PLEG): container finished" podID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" exitCode=0 Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.884609 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r75vv" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886870 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r75vv" event={"ID":"b5adcee9-1419-4c20-b96e-4886a1f19c68","Type":"ContainerDied","Data":"429bdccd454e07224012faaaa97764f590a609292e1cea0ebe0e35d368f7b141"} Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.886960 4867 scope.go:117] "RemoveContainer" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.923300 4867 scope.go:117] "RemoveContainer" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.930864 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.948694 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r75vv"] Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.965158 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw"] Feb 14 04:44:45 crc kubenswrapper[4867]: W0214 04:44:45.966172 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b6f69a7_8ea6_48ad_aa0c_bd11b1efef10.slice/crio-0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac WatchSource:0}: 
Error finding container 0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac: Status 404 returned error can't find the container with id 0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac Feb 14 04:44:45 crc kubenswrapper[4867]: I0214 04:44:45.972775 4867 scope.go:117] "RemoveContainer" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.083721 4867 scope.go:117] "RemoveContainer" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.084284 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": container with ID starting with 666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392 not found: ID does not exist" containerID="666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084318 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392"} err="failed to get container status \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": rpc error: code = NotFound desc = could not find container \"666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392\": container with ID starting with 666c681872e1eab3e17aeafbe100cddf40a7eab0a3a2721a86433a8789ec0392 not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084338 4867 scope.go:117] "RemoveContainer" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.084769 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": container with ID starting with 10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e not found: ID does not exist" containerID="10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084789 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e"} err="failed to get container status \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": rpc error: code = NotFound desc = could not find container \"10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e\": container with ID starting with 10005da65e2c73639ff16fcacd7548293f56d416bfac9e18c035429ff03e132e not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.084802 4867 scope.go:117] "RemoveContainer" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: E0214 04:44:46.085104 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": container with ID starting with 4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449 not found: ID does not exist" containerID="4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449" Feb 14 04:44:46 crc kubenswrapper[4867]: 
I0214 04:44:46.085122 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449"} err="failed to get container status \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": rpc error: code = NotFound desc = could not find container \"4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449\": container with ID starting with 4bbc8658b79a62d3761a54fc5307fcfbd9755f7df3887332b937b52cb17b7449 not found: ID does not exist" Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.735481 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" probeResult="failure" output=< Feb 14 04:44:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:44:46 crc kubenswrapper[4867]: > Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.907490 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerStarted","Data":"4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a"} Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.907574 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerStarted","Data":"0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac"} Feb 14 04:44:46 crc kubenswrapper[4867]: I0214 04:44:46.936973 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" podStartSLOduration=2.5361594739999997 podStartE2EDuration="2.936951694s" podCreationTimestamp="2026-02-14 04:44:44 +0000 UTC" firstStartedPulling="2026-02-14 04:44:45.97346592 +0000 UTC m=+2118.054403234" lastFinishedPulling="2026-02-14 04:44:46.37425814 +0000 UTC m=+2118.455195454" observedRunningTime="2026-02-14 04:44:46.928236205 +0000 UTC m=+2119.009173519" watchObservedRunningTime="2026-02-14 04:44:46.936951694 +0000 UTC m=+2119.017889008" Feb 14 04:44:47 crc kubenswrapper[4867]: I0214 04:44:47.014300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" path="/var/lib/kubelet/pods/b5adcee9-1419-4c20-b96e-4886a1f19c68/volumes" Feb 14 04:44:56 crc kubenswrapper[4867]: I0214 04:44:56.729328 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" probeResult="failure" output=< Feb 14 04:44:56 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:44:56 crc kubenswrapper[4867]: > Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.047032 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.063954 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-k2ls7"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150247 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150824 
4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150843 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-content" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150858 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-content" Feb 14 04:45:00 crc kubenswrapper[4867]: E0214 04:45:00.150916 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-utilities" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.150923 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="extract-utilities" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.151154 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5adcee9-1419-4c20-b96e-4886a1f19c68" containerName="registry-server" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.152121 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.156028 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.161228 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.162809 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.168766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.169005 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.169108 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271317 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271415 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.271481 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.272628 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.277461 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.289479 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"collect-profiles-29517405-57nzc\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.483758 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:00 crc kubenswrapper[4867]: I0214 04:45:00.964955 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 04:45:01 crc kubenswrapper[4867]: I0214 04:45:01.036318 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be79f3c-fa78-40d2-9ad9-d1dfd965c831" path="/var/lib/kubelet/pods/4be79f3c-fa78-40d2-9ad9-d1dfd965c831/volumes" Feb 14 04:45:01 crc kubenswrapper[4867]: I0214 04:45:01.089910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerStarted","Data":"d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f"} Feb 14 04:45:02 crc kubenswrapper[4867]: I0214 04:45:02.124498 4867 generic.go:334] "Generic (PLEG): container finished" podID="c9309a87-899d-49c2-885b-9d5689c3086b" containerID="ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a" exitCode=0 Feb 14 04:45:02 crc kubenswrapper[4867]: I0214 04:45:02.124633 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerDied","Data":"ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a"} Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.661670 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665357 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665487 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.665847 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") pod \"c9309a87-899d-49c2-885b-9d5689c3086b\" (UID: \"c9309a87-899d-49c2-885b-9d5689c3086b\") " Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.667057 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.671556 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.679192 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9" (OuterVolumeSpecName: "kube-api-access-jnxl9") pod "c9309a87-899d-49c2-885b-9d5689c3086b" (UID: "c9309a87-899d-49c2-885b-9d5689c3086b"). InnerVolumeSpecName "kube-api-access-jnxl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769597 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnxl9\" (UniqueName: \"kubernetes.io/projected/c9309a87-899d-49c2-885b-9d5689c3086b-kube-api-access-jnxl9\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769629 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9309a87-899d-49c2-885b-9d5689c3086b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:03 crc kubenswrapper[4867]: I0214 04:45:03.769638 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9309a87-899d-49c2-885b-9d5689c3086b-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" event={"ID":"c9309a87-899d-49c2-885b-9d5689c3086b","Type":"ContainerDied","Data":"d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f"} Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149773 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0a6d3a10289bed0a3a52adf3fa173eea292037db6396740729ea2564654297f" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.149847 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc" Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.738082 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:45:04 crc kubenswrapper[4867]: I0214 04:45:04.753115 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517360-jfvsd"] Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.013887 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71ac31c5-7a3b-4c18-aa9e-c193fa8f778a" path="/var/lib/kubelet/pods/71ac31c5-7a3b-4c18-aa9e-c193fa8f778a/volumes" Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.729109 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:05 crc kubenswrapper[4867]: I0214 04:45:05.781843 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:06 crc kubenswrapper[4867]: I0214 04:45:06.382073 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.192781 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5tmrm" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" containerID="cri-o://8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" gracePeriod=2 Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.660844 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.782845 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.782927 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.783093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") pod \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\" (UID: \"f7288f7d-b1ef-4c2e-afab-abf0640eca5b\") " Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.783848 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities" (OuterVolumeSpecName: "utilities") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.784705 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.794283 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2" (OuterVolumeSpecName: "kube-api-access-gpdx2") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "kube-api-access-gpdx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.886390 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpdx2\" (UniqueName: \"kubernetes.io/projected/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-kube-api-access-gpdx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.926170 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7288f7d-b1ef-4c2e-afab-abf0640eca5b" (UID: "f7288f7d-b1ef-4c2e-afab-abf0640eca5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:07 crc kubenswrapper[4867]: I0214 04:45:07.989635 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7288f7d-b1ef-4c2e-afab-abf0640eca5b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.205323 4867 generic.go:334] "Generic (PLEG): container finished" podID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" exitCode=0 Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206368 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5tmrm" event={"ID":"f7288f7d-b1ef-4c2e-afab-abf0640eca5b","Type":"ContainerDied","Data":"0d54b1c70e28e064450fb2d8570606b5e38f9337b5941836227df530cc9171aa"} Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206459 4867 scope.go:117] "RemoveContainer" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.206738 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5tmrm" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.257067 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.259101 4867 scope.go:117] "RemoveContainer" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.272243 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5tmrm"] Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.282789 4867 scope.go:117] "RemoveContainer" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331065 4867 scope.go:117] "RemoveContainer" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.331711 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": container with ID starting with 8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9 not found: ID does not exist" containerID="8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331778 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9"} err="failed to get container status \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": rpc error: code = NotFound desc = could not find container \"8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9\": container with ID starting with 8734ecedd6ef520c39b963d953d5ba95466a58f89253815e3cfaf6003fdb92f9 not found: ID does not exist" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.331810 4867 scope.go:117] "RemoveContainer" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.332248 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": container with ID starting with 36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8 not found: ID does not exist" containerID="36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332335 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8"} err="failed to get container status \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": rpc error: code = NotFound desc = could not find container \"36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8\": container with ID starting with 36e4894f5c0703edfdafd6fce0e06fa2efe687f65144ec11262a4f943fdda9c8 not found: ID does not exist" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332411 4867 scope.go:117] "RemoveContainer" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: E0214 04:45:08.332747 4867 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": container with ID starting with b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae not found: ID does not exist" containerID="b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae" Feb 14 04:45:08 crc kubenswrapper[4867]: I0214 04:45:08.332773 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae"} err="failed to get container status \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": rpc error: code = NotFound desc = could not find container \"b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae\": container with ID starting with b5a7e32df88ba8c060c472b7c45bf07342ae640287bab89509a991d77dd9e9ae not found: ID does not exist" Feb 14 04:45:09 crc kubenswrapper[4867]: I0214 04:45:09.011094 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" path="/var/lib/kubelet/pods/f7288f7d-b1ef-4c2e-afab-abf0640eca5b/volumes" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.993879 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995137 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-utilities" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995160 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-utilities" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995187 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995196 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995223 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995233 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: E0214 04:45:12.995269 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-content" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995277 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="extract-content" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995541 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" containerName="collect-profiles" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.995579 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7288f7d-b1ef-4c2e-afab-abf0640eca5b" containerName="registry-server" Feb 14 04:45:12 crc kubenswrapper[4867]: I0214 04:45:12.998328 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.016626 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.110985 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.111280 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.111454 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214152 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214311 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214488 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.214920 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.215154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.238543 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"redhat-marketplace-rkbk8\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.322844 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:13 crc kubenswrapper[4867]: I0214 04:45:13.878048 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.278922 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" containerID="f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6" exitCode=0 Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.279220 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6"} Feb 14 04:45:14 crc kubenswrapper[4867]: I0214 04:45:14.279249 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"7dba7fa7daa95862a94787ddc52839f0db353369b79c7f54925323d822855af2"} Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.113046 4867 scope.go:117] "RemoveContainer" containerID="8824aa9f9bf0f294916520c801c31cbd1d85520f64360c54d9e396f8acec8e15" Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.140706 4867 scope.go:117] "RemoveContainer" containerID="aa8fea275ce5bfacf3d08b45c45e75a0934c35dd23257fef4ead33c26bfccaa6" Feb 14 04:45:15 crc kubenswrapper[4867]: I0214 04:45:15.294147 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49"} Feb 14 04:45:16 crc kubenswrapper[4867]: I0214 04:45:16.307023 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" containerID="e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49" exitCode=0 Feb 14 04:45:16 crc kubenswrapper[4867]: I0214 04:45:16.307199 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49"} Feb 14 04:45:17 crc kubenswrapper[4867]: I0214 04:45:17.321731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerStarted","Data":"60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb"} Feb 14 04:45:17 crc kubenswrapper[4867]: I0214 04:45:17.357996 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rkbk8" podStartSLOduration=2.8607585110000002 podStartE2EDuration="5.357974079s" podCreationTimestamp="2026-02-14 04:45:12 +0000 UTC" firstStartedPulling="2026-02-14 04:45:14.28095991 +0000 UTC m=+2146.361897224" lastFinishedPulling="2026-02-14 
04:45:16.778175478 +0000 UTC m=+2148.859112792" observedRunningTime="2026-02-14 04:45:17.349468555 +0000 UTC m=+2149.430405899" watchObservedRunningTime="2026-02-14 04:45:17.357974079 +0000 UTC m=+2149.438911393" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.323636 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.324300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.377783 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.385807 4867 generic.go:334] "Generic (PLEG): container finished" podID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerID="4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a" exitCode=0 Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.387053 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerDied","Data":"4cb72980b5b9bee8ac466efa1a7b02120564eee883847b51bd4f9469ad29807a"} Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.439741 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:23 crc kubenswrapper[4867]: I0214 04:45:23.617378 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:24 crc kubenswrapper[4867]: I0214 04:45:24.861415 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041623 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041775 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.041948 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") pod \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\" (UID: \"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10\") " Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.048288 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2" (OuterVolumeSpecName: "kube-api-access-nccx2") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). InnerVolumeSpecName "kube-api-access-nccx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.078804 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory" (OuterVolumeSpecName: "inventory") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.083615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" (UID: "0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145925 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145964 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.145980 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nccx2\" (UniqueName: \"kubernetes.io/projected/0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10-kube-api-access-nccx2\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.409953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" event={"ID":"0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10","Type":"ContainerDied","Data":"0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac"} Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.409979 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-c22xw" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.410009 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ce552b1f72639eb3f74a2e6671f112bab5f516045d9b65f7a60c6a824ab8dac" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.410105 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rkbk8" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server" containerID="cri-o://60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" gracePeriod=2 Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.512329 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:25 crc kubenswrapper[4867]: E0214 04:45:25.513103 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.513164 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.513472 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.514604 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.549901 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.550490 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.552367 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.553361 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.553798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.662844 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.662938 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.663404 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765573 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.765686 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.769701 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.772229 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.785643 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-78rwr\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:25 crc kubenswrapper[4867]: I0214 04:45:25.969773 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.427025 4867 generic.go:334] "Generic (PLEG): container finished" podID="95abd277-f40d-4636-8270-ff2346c0c30e" containerID="60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" exitCode=0 Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.427179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb"} Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.524106 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.538895 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr"] Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690334 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690377 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.690586 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") pod \"95abd277-f40d-4636-8270-ff2346c0c30e\" (UID: \"95abd277-f40d-4636-8270-ff2346c0c30e\") " Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.691335 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities" (OuterVolumeSpecName: "utilities") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.695686 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6" (OuterVolumeSpecName: "kube-api-access-xx5v6") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "kube-api-access-xx5v6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.720171 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95abd277-f40d-4636-8270-ff2346c0c30e" (UID: "95abd277-f40d-4636-8270-ff2346c0c30e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795017 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx5v6\" (UniqueName: \"kubernetes.io/projected/95abd277-f40d-4636-8270-ff2346c0c30e-kube-api-access-xx5v6\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795576 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:26 crc kubenswrapper[4867]: I0214 04:45:26.795665 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95abd277-f40d-4636-8270-ff2346c0c30e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.440980 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerStarted","Data":"125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.441624 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerStarted","Data":"750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.446862 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rkbk8" event={"ID":"95abd277-f40d-4636-8270-ff2346c0c30e","Type":"ContainerDied","Data":"7dba7fa7daa95862a94787ddc52839f0db353369b79c7f54925323d822855af2"} Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.446917 4867 scope.go:117] "RemoveContainer" containerID="60838b1a7eccd0bd11a68ff8a246e089d01be15b4c1189c4254eecebf47502eb" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.447052 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rkbk8" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.471255 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" podStartSLOduration=2.097174937 podStartE2EDuration="2.471227112s" podCreationTimestamp="2026-02-14 04:45:25 +0000 UTC" firstStartedPulling="2026-02-14 04:45:26.539943341 +0000 UTC m=+2158.620880655" lastFinishedPulling="2026-02-14 04:45:26.913995516 +0000 UTC m=+2158.994932830" observedRunningTime="2026-02-14 04:45:27.465968084 +0000 UTC m=+2159.546905408" watchObservedRunningTime="2026-02-14 04:45:27.471227112 +0000 UTC m=+2159.552164436" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.489768 4867 scope.go:117] "RemoveContainer" containerID="e7f70ffebfcf510c6ee587e19b5dd98542b06b91636607b80a052f56be830f49" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.508558 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.512180 4867 scope.go:117] "RemoveContainer" containerID="f7b29d61fb24ac793717ab513c38001c265a48bef742ed02acc7976e062136a6" Feb 14 04:45:27 crc kubenswrapper[4867]: I0214 04:45:27.525475 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rkbk8"] Feb 14 04:45:29 crc kubenswrapper[4867]: I0214 04:45:29.015900 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" path="/var/lib/kubelet/pods/95abd277-f40d-4636-8270-ff2346c0c30e/volumes" Feb 14 04:46:12 crc kubenswrapper[4867]: I0214 04:46:12.913409 4867 generic.go:334] "Generic (PLEG): container finished" podID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerID="125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c" exitCode=0 Feb 14 04:46:12 crc kubenswrapper[4867]: I0214 04:46:12.913475 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerDied","Data":"125fc3ab07da934876f9ac00cce7fe26fbc7c1cfcc5339611269a9b23363849c"} Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.426005 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533142 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533556 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.533842 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") pod \"e04d43db-dfbf-41c6-8b73-48ff87baa800\" (UID: \"e04d43db-dfbf-41c6-8b73-48ff87baa800\") " Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.539790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds" (OuterVolumeSpecName: "kube-api-access-z6nds") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "kube-api-access-z6nds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.571086 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory" (OuterVolumeSpecName: "inventory") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.572329 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e04d43db-dfbf-41c6-8b73-48ff87baa800" (UID: "e04d43db-dfbf-41c6-8b73-48ff87baa800"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637693 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637733 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e04d43db-dfbf-41c6-8b73-48ff87baa800-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.637744 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6nds\" (UniqueName: \"kubernetes.io/projected/e04d43db-dfbf-41c6-8b73-48ff87baa800-kube-api-access-z6nds\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.935780 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" event={"ID":"e04d43db-dfbf-41c6-8b73-48ff87baa800","Type":"ContainerDied","Data":"750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759"} Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.936271 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="750729294b3f87321e8b630da6705c327d5b33fdc3cbf1e2deddb61b89bb4759" Feb 14 04:46:14 crc kubenswrapper[4867]: I0214 04:46:14.935854 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-78rwr" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.050741 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"] Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051369 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051393 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server" Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051421 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-utilities" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051430 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-utilities" Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051502 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051615 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:15 crc kubenswrapper[4867]: E0214 04:46:15.051638 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-content" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051647 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="extract-content" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051925 4867 
memory_manager.go:354] "RemoveStaleState removing state" podUID="95abd277-f40d-4636-8270-ff2346c0c30e" containerName="registry-server" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.051952 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04d43db-dfbf-41c6-8b73-48ff87baa800" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.053038 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.061330 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"] Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.061997 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062126 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062575 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.062770 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.152491 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.152780 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.153967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257282 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257440 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.257518 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.263139 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.268911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.280774 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"ssh-known-hosts-edpm-deployment-5rl49\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.388267 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:15 crc kubenswrapper[4867]: I0214 04:46:15.986116 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5rl49"] Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.965879 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerStarted","Data":"15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea"} Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.966495 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerStarted","Data":"eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5"} Feb 14 04:46:16 crc kubenswrapper[4867]: I0214 04:46:16.990778 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" podStartSLOduration=1.5338248540000001 podStartE2EDuration="1.990752341s" podCreationTimestamp="2026-02-14 04:46:15 +0000 UTC" firstStartedPulling="2026-02-14 04:46:15.993449511 +0000 UTC m=+2208.074386825" lastFinishedPulling="2026-02-14 04:46:16.450376998 +0000 UTC m=+2208.531314312" observedRunningTime="2026-02-14 04:46:16.983793928 +0000 UTC m=+2209.064731262" watchObservedRunningTime="2026-02-14 04:46:16.990752341 +0000 UTC m=+2209.071689655" Feb 14 04:46:24 crc kubenswrapper[4867]: I0214 04:46:24.048693 4867 generic.go:334] "Generic (PLEG): container finished" podID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerID="15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea" exitCode=0 Feb 14 04:46:24 crc kubenswrapper[4867]: I0214 04:46:24.048772 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerDied","Data":"15acb3b34153ca1356737c299c7c242dde0cbf2dfec6a09e182e67becd2cf5ea"} Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.620727 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666008 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666465 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.666632 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") pod \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\" (UID: \"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7\") " Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.672158 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66" (OuterVolumeSpecName: "kube-api-access-czk66") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "kube-api-access-czk66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.706645 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.712446 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" (UID: "e72df4ca-d603-4f2e-9ff1-3ec392ef11b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769831 4867 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769886 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:25 crc kubenswrapper[4867]: I0214 04:46:25.769902 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czk66\" (UniqueName: \"kubernetes.io/projected/e72df4ca-d603-4f2e-9ff1-3ec392ef11b7-kube-api-access-czk66\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069589 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" event={"ID":"e72df4ca-d603-4f2e-9ff1-3ec392ef11b7","Type":"ContainerDied","Data":"eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5"} Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069631 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb32a6602bc91b2bbe627a287febff7d49e7c842a26964d9336fa01d5b7c94b5" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.069653 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5rl49" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.151598 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"] Feb 14 04:46:26 crc kubenswrapper[4867]: E0214 04:46:26.152059 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.152078 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.152297 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="e72df4ca-d603-4f2e-9ff1-3ec392ef11b7" containerName="ssh-known-hosts-edpm-deployment" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.153177 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.164171 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"] Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192409 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192617 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192729 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.192830 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195682 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195732 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.195899 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298021 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298071 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.298171 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.302046 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.306195 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.313724 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lsj48\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:26 crc kubenswrapper[4867]: I0214 04:46:26.510144 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:27 crc kubenswrapper[4867]: I0214 04:46:27.071408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48"] Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.105703 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerStarted","Data":"ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e"} Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.106244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerStarted","Data":"3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da"} Feb 14 04:46:28 crc kubenswrapper[4867]: I0214 04:46:28.137076 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" podStartSLOduration=1.561428933 podStartE2EDuration="2.137046704s" podCreationTimestamp="2026-02-14 04:46:26 +0000 UTC" firstStartedPulling="2026-02-14 04:46:27.075146564 +0000 UTC m=+2219.156083878" lastFinishedPulling="2026-02-14 04:46:27.650764335 +0000 UTC m=+2219.731701649" observedRunningTime="2026-02-14 04:46:28.124981197 +0000 UTC m=+2220.205918511" watchObservedRunningTime="2026-02-14 04:46:28.137046704 +0000 UTC m=+2220.217984018" Feb 14 04:46:31 crc kubenswrapper[4867]: I0214 04:46:31.251544 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:46:31 crc kubenswrapper[4867]: 
I0214 04:46:31.252118 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:46:35 crc kubenswrapper[4867]: I0214 04:46:35.403362 4867 generic.go:334] "Generic (PLEG): container finished" podID="764366f2-ea14-4cc9-a195-52ee347e666d" containerID="ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e" exitCode=0 Feb 14 04:46:35 crc kubenswrapper[4867]: I0214 04:46:35.403455 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerDied","Data":"ff04b8b79a32a8da5015e7154d7228eafe8b6b301c3ec642cfae44e02e65557e"} Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.026177 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124531 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124658 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.124721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") pod \"764366f2-ea14-4cc9-a195-52ee347e666d\" (UID: \"764366f2-ea14-4cc9-a195-52ee347e666d\") " Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.130530 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw" (OuterVolumeSpecName: "kube-api-access-sz2xw") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "kube-api-access-sz2xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.160271 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory" (OuterVolumeSpecName: "inventory") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.160575 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "764366f2-ea14-4cc9-a195-52ee347e666d" (UID: "764366f2-ea14-4cc9-a195-52ee347e666d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228397 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228455 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/764366f2-ea14-4cc9-a195-52ee347e666d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.228475 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz2xw\" (UniqueName: \"kubernetes.io/projected/764366f2-ea14-4cc9-a195-52ee347e666d-kube-api-access-sz2xw\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426459 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" event={"ID":"764366f2-ea14-4cc9-a195-52ee347e666d","Type":"ContainerDied","Data":"3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da"} Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426520 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lsj48" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.426525 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad83775e7c29964628420a0feab56890f6cede5166acecef04f67f27b2815da" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.520752 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"] Feb 14 04:46:37 crc kubenswrapper[4867]: E0214 04:46:37.521720 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.521763 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.522042 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="764366f2-ea14-4cc9-a195-52ee347e666d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.523169 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.525790 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.527222 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.527622 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.529455 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.538383 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"] Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.637737 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.638476 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.638621 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741364 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.741530 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.747055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.747647 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.759097 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:37 crc kubenswrapper[4867]: I0214 04:46:37.849912 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:38 crc kubenswrapper[4867]: I0214 04:46:38.504456 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml"] Feb 14 04:46:38 crc kubenswrapper[4867]: I0214 04:46:38.517230 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.445406 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerStarted","Data":"7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6"} Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.446027 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerStarted","Data":"ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410"} Feb 14 04:46:39 crc kubenswrapper[4867]: I0214 04:46:39.466313 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" podStartSLOduration=2.085456559 podStartE2EDuration="2.466291203s" podCreationTimestamp="2026-02-14 04:46:37 +0000 UTC" firstStartedPulling="2026-02-14 04:46:38.516857633 +0000 UTC m=+2230.597794967" lastFinishedPulling="2026-02-14 04:46:38.897692297 +0000 UTC m=+2230.978629611" observedRunningTime="2026-02-14 04:46:39.459119714 +0000 UTC m=+2231.540057028" watchObservedRunningTime="2026-02-14 04:46:39.466291203 +0000 UTC m=+2231.547228517" Feb 14 04:46:48 crc kubenswrapper[4867]: I0214 04:46:48.555854 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a0a98e3-261b-460d-92c2-4fce312f5171" 
containerID="7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6" exitCode=0 Feb 14 04:46:48 crc kubenswrapper[4867]: I0214 04:46:48.556145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerDied","Data":"7877765a53214b63333058663a364c04a85140165441843e76a1cd10c91089b6"} Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.102311 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269293 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.269356 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") pod \"4a0a98e3-261b-460d-92c2-4fce312f5171\" (UID: \"4a0a98e3-261b-460d-92c2-4fce312f5171\") " Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.280468 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp" (OuterVolumeSpecName: "kube-api-access-btqxp") pod "4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "kube-api-access-btqxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.303853 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory" (OuterVolumeSpecName: "inventory") pod "4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.326526 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a0a98e3-261b-460d-92c2-4fce312f5171" (UID: "4a0a98e3-261b-460d-92c2-4fce312f5171"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372411 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btqxp\" (UniqueName: \"kubernetes.io/projected/4a0a98e3-261b-460d-92c2-4fce312f5171-kube-api-access-btqxp\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372446 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.372458 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a0a98e3-261b-460d-92c2-4fce312f5171-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602303 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" event={"ID":"4a0a98e3-261b-460d-92c2-4fce312f5171","Type":"ContainerDied","Data":"ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410"} Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602351 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba534784dec38ffd594dff4d1903997e1574109c7949bf293e757c30c148d410" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.602413 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.726821 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"] Feb 14 04:46:50 crc kubenswrapper[4867]: E0214 04:46:50.727355 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.727377 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.727636 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a98e3-261b-460d-92c2-4fce312f5171" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.728574 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.732624 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735352 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735644 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735707 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735750 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.735978 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736153 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736317 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.736593 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.761137 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"] Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883710 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883930 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.883998 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884071 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884142 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884265 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884388 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884449 4867 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884481 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884597 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.884798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.885270 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987529 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987601 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987654 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987737 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987770 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987806 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987852 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987883 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.987927 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988008 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988268 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988317 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988377 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.988462 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" Feb 14 04:46:50 crc 
kubenswrapper[4867]: I0214 04:46:50.992430 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994009 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994148 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.994898 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.995026 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.995568 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.996711 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:50 crc kubenswrapper[4867]: I0214 04:46:50.996887 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000055 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000294 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000348 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.000728 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.007299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.012363 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.016651 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.016652 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.057363 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:46:51 crc kubenswrapper[4867]: I0214 04:46:51.658959 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"]
Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.628044 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerStarted","Data":"eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0"}
Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.628888 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerStarted","Data":"711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb"}
Feb 14 04:46:52 crc kubenswrapper[4867]: I0214 04:46:52.659165 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" podStartSLOduration=2.275855894 podStartE2EDuration="2.659139622s" podCreationTimestamp="2026-02-14 04:46:50 +0000 UTC" firstStartedPulling="2026-02-14 04:46:51.662562242 +0000 UTC m=+2243.743499556" lastFinishedPulling="2026-02-14 04:46:52.04584597 +0000 UTC m=+2244.126783284" observedRunningTime="2026-02-14 04:46:52.651923302 +0000 UTC m=+2244.732860616" watchObservedRunningTime="2026-02-14 04:46:52.659139622 +0000 UTC m=+2244.740076936"
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.047471 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-l8hr2"]
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.056691 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-l8hr2"]
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.727330 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.730939 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.741042 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903077 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903234 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:00 crc kubenswrapper[4867]: I0214 04:47:00.903272 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005615 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005746 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.005779 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.006821 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.007154 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.013646 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632c48c8-f0d5-4dc9-823e-fa96b9265e97" path="/var/lib/kubelet/pods/632c48c8-f0d5-4dc9-823e-fa96b9265e97/volumes"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.034080 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"certified-operators-hthk2\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") " pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.082745 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.251908 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.252266 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.783873 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:01 crc kubenswrapper[4867]: I0214 04:47:01.832311 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"79d7515378363ed08b88377b44a53b803ef90ae278e3a0dcb05f423c876bc5f3"}
Feb 14 04:47:02 crc kubenswrapper[4867]: I0214 04:47:02.899681 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416" exitCode=0
Feb 14 04:47:02 crc kubenswrapper[4867]: I0214 04:47:02.900195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"}
Feb 14 04:47:04 crc kubenswrapper[4867]: I0214 04:47:04.931080 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"}
Feb 14 04:47:06 crc kubenswrapper[4867]: I0214 04:47:06.955868 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b" exitCode=0
Feb 14 04:47:06 crc kubenswrapper[4867]: I0214 04:47:06.955957 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"}
Feb 14 04:47:07 crc kubenswrapper[4867]: I0214 04:47:07.967134 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerStarted","Data":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"}
Feb 14 04:47:07 crc kubenswrapper[4867]: I0214 04:47:07.992393 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hthk2" podStartSLOduration=3.50929959 podStartE2EDuration="7.992375246s" podCreationTimestamp="2026-02-14 04:47:00 +0000 UTC" firstStartedPulling="2026-02-14 04:47:02.915980194 +0000 UTC m=+2254.996917498" lastFinishedPulling="2026-02-14 04:47:07.39905584 +0000 UTC m=+2259.479993154" observedRunningTime="2026-02-14 04:47:07.987446267 +0000 UTC m=+2260.068383591" watchObservedRunningTime="2026-02-14 04:47:07.992375246 +0000 UTC m=+2260.073312550"
Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.083238 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.083948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:11 crc kubenswrapper[4867]: I0214 04:47:11.164597 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:12 crc kubenswrapper[4867]: I0214 04:47:12.054559 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:12 crc kubenswrapper[4867]: I0214 04:47:12.140369 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.025014 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hthk2" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server" containerID="cri-o://a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" gracePeriod=2
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.551756 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.688687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") "
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.688862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") "
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689051 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") pod \"709ab839-d449-4265-b59d-192b93a2039a\" (UID: \"709ab839-d449-4265-b59d-192b93a2039a\") "
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689691 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities" (OuterVolumeSpecName: "utilities") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.689954 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.704342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q" (OuterVolumeSpecName: "kube-api-access-9hv7q") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "kube-api-access-9hv7q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.750790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "709ab839-d449-4265-b59d-192b93a2039a" (UID: "709ab839-d449-4265-b59d-192b93a2039a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.792891 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hv7q\" (UniqueName: \"kubernetes.io/projected/709ab839-d449-4265-b59d-192b93a2039a-kube-api-access-9hv7q\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:14 crc kubenswrapper[4867]: I0214 04:47:14.792927 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/709ab839-d449-4265-b59d-192b93a2039a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039471 4867 generic.go:334] "Generic (PLEG): container finished" podID="709ab839-d449-4265-b59d-192b93a2039a" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9" exitCode=0
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039545 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"}
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039566 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hthk2"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039591 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hthk2" event={"ID":"709ab839-d449-4265-b59d-192b93a2039a","Type":"ContainerDied","Data":"79d7515378363ed08b88377b44a53b803ef90ae278e3a0dcb05f423c876bc5f3"}
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.039623 4867 scope.go:117] "RemoveContainer" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.074992 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.075474 4867 scope.go:117] "RemoveContainer" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.087293 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hthk2"]
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.104663 4867 scope.go:117] "RemoveContainer" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.165892 4867 scope.go:117] "RemoveContainer" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"
Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 04:47:15.166378 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": container with ID starting with a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9 not found: ID does not exist" containerID="a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.166432 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9"} err="failed to get container status \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": rpc error: code = NotFound desc = could not find container \"a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9\": container with ID starting with a4e8fb29ae930f04a0251f3a78aea8d3dffb6c99123a0596e532b157ffc496e9 not found: ID does not exist"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.166461 4867 scope.go:117] "RemoveContainer" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"
Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 04:47:15.166958 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": container with ID starting with 5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b not found: ID does not exist" containerID="5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167005 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b"} err="failed to get container status \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": rpc error: code = NotFound desc = could not find container \"5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b\": container with ID starting with 5847f981bdfeee05bd39dc4e5dfc6eb0764d7c1bb29f2a7b3006a95305dccd2b not found: ID does not exist"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167034 4867 scope.go:117] "RemoveContainer" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"
Feb 14 04:47:15 crc kubenswrapper[4867]: E0214 04:47:15.167459 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": container with ID starting with c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416 not found: ID does not exist" containerID="c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.167481 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416"} err="failed to get container status \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": rpc error: code = NotFound desc = could not find container \"c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416\": container with ID starting with c21c0c4fbc22f9cbe9392d488a21774797ea11e5926859a442df01ad36339416 not found: ID does not exist"
Feb 14 04:47:15 crc kubenswrapper[4867]: I0214 04:47:15.352296 4867 scope.go:117] "RemoveContainer" containerID="de721f6c491679859a0694193254d070c18018a3dbb5ddc13f5e6825aefb8ef2"
Feb 14 04:47:17 crc kubenswrapper[4867]: I0214 04:47:17.009861 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="709ab839-d449-4265-b59d-192b93a2039a" path="/var/lib/kubelet/pods/709ab839-d449-4265-b59d-192b93a2039a/volumes"
Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251169 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251682 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.251731 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.254530 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 04:47:31 crc kubenswrapper[4867]: I0214 04:47:31.254609 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" gracePeriod=600
Feb 14 04:47:31 crc kubenswrapper[4867]: E0214 04:47:31.378867 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229441 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" exitCode=0
Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229542 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"}
Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.229783 4867 scope.go:117] "RemoveContainer" containerID="8ef22e983ed33de6916be45630c900d98abc980cea24a0e66ba99e9fbf263b65"
Feb 14 04:47:32 crc kubenswrapper[4867]: I0214 04:47:32.235181 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:47:32 crc kubenswrapper[4867]: E0214 04:47:32.236216 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:47:36 crc kubenswrapper[4867]: I0214 04:47:36.272193 4867 generic.go:334] "Generic (PLEG): container finished" podID="01cb12dd-9d34-4898-941a-05635d21630f" containerID="eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0" exitCode=0
Feb 14 04:47:36 crc kubenswrapper[4867]: I0214 04:47:36.272283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerDied","Data":"eb50eb14eba880c0f518af2dcfcdf4cf46735bb1f20af3d0acff7d38753ef4e0"}
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.791705 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900599 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900690 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900761 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900792 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900917 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.900961 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901166 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901304 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901347 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901391 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901418 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901488 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901546 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901680 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.901756 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"01cb12dd-9d34-4898-941a-05635d21630f\" (UID: \"01cb12dd-9d34-4898-941a-05635d21630f\") "
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.908992 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909546 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.909962 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.910551 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb" (OuterVolumeSpecName: "kube-api-access-j7lqb") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "kube-api-access-j7lqb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914185 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914309 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914456 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.914458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.916278 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.916402 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.917458 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.921535 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.948407 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:37 crc kubenswrapper[4867]: I0214 04:47:37.954299 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory" (OuterVolumeSpecName: "inventory") pod "01cb12dd-9d34-4898-941a-05635d21630f" (UID: "01cb12dd-9d34-4898-941a-05635d21630f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005452 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-inventory\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005519 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005531 4867 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005544 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005558 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7lqb\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-kube-api-access-j7lqb\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005570 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005582 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005594 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005604 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005614 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005623 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005631 4867 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005639 4867 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01cb12dd-9d34-4898-941a-05635d21630f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005647 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005672 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.005684 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/01cb12dd-9d34-4898-941a-05635d21630f-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\""
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.298961 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9" event={"ID":"01cb12dd-9d34-4898-941a-05635d21630f","Type":"ContainerDied","Data":"711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb"}
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.299009 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711f4fe27cebbb2e6c84267ccd7dca6591c48e5cf8880040abe090f7f6d2f6eb"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.299070 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.436947 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"]
Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437836 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-utilities"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437863 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-utilities"
Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437886 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437894 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server"
Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437941 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437949 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:47:38 crc kubenswrapper[4867]: E0214 04:47:38.437959 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-content"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.437965 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="extract-content"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.438181 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="709ab839-d449-4265-b59d-192b93a2039a" containerName="registry-server"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.438206 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cb12dd-9d34-4898-941a-05635d21630f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.439101 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444294 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444486 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444686 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.444962 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.445111 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.450188 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"]
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518798 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518901 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518931 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518971 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.518989 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.621597 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.621977 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622112 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622192 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.622486 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.623122 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.625216 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.625597 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.627343 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.649688 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-vjz5q\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:38 crc kubenswrapper[4867]: I0214 04:47:38.762634 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:47:39 crc kubenswrapper[4867]: I0214 04:47:39.436355 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"]
Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.323525 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerStarted","Data":"c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af"}
Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.323931 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerStarted","Data":"3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d"}
Feb 14 04:47:40 crc kubenswrapper[4867]: I0214 04:47:40.388099 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" podStartSLOduration=1.854590189 podStartE2EDuration="2.38806498s" podCreationTimestamp="2026-02-14 04:47:38 +0000 UTC" firstStartedPulling="2026-02-14 04:47:39.438737493 +0000 UTC m=+2291.519674807" lastFinishedPulling="2026-02-14 04:47:39.972212244 +0000 UTC m=+2292.053149598" observedRunningTime="2026-02-14 04:47:40.359974241 +0000 UTC m=+2292.440911565" watchObservedRunningTime="2026-02-14 04:47:40.38806498 +0000 UTC m=+2292.469002294"
Feb 14 04:47:43 crc kubenswrapper[4867]: I0214 04:47:43.997744 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:47:43 crc kubenswrapper[4867]: E0214 04:47:43.998552 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:47:46 crc kubenswrapper[4867]: I0214 04:47:46.050697 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-vgdj4"]
Feb 14 04:47:46 crc kubenswrapper[4867]: I0214 04:47:46.061756 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-vgdj4"]
Feb 14 04:47:47 crc kubenswrapper[4867]: I0214 04:47:47.015994 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="844735e8-e1c8-426f-8f5b-ce4f64e2ffbf" path="/var/lib/kubelet/pods/844735e8-e1c8-426f-8f5b-ce4f64e2ffbf/volumes"
Feb 14 04:47:54 crc kubenswrapper[4867]: I0214 04:47:54.997446 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:47:54 crc kubenswrapper[4867]: E0214 04:47:54.998156 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:48:05 crc kubenswrapper[4867]: I0214 04:48:05.998375 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:48:06 crc kubenswrapper[4867]: E0214 04:48:05.999473 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:48:15 crc kubenswrapper[4867]: I0214 04:48:15.474127 4867 scope.go:117] "RemoveContainer" containerID="fe59d6a45b3b1f49664971d341b7fc6d30fef719063bc033373a5e6d9bd21e9a"
Feb 14 04:48:16 crc kubenswrapper[4867]: I0214 04:48:16.998901 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:48:17 crc kubenswrapper[4867]: E0214 04:48:16.999424 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:48:27 crc kubenswrapper[4867]: I0214 04:48:27.997698 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2"
Feb 14 04:48:27 crc kubenswrapper[4867]: E0214 04:48:27.998614 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 04:48:37 crc kubenswrapper[4867]: I0214 04:48:37.960584 4867 generic.go:334] "Generic (PLEG): container finished" podID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerID="c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af" exitCode=0
Feb 14 04:48:37 crc kubenswrapper[4867]: I0214 04:48:37.960720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerDied","Data":"c3aca2cdbcd4a8b8f806a2e110ffaff4465e241413b62d01832043305d4c81af"}
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.515728 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q"
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639030 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") "
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639322 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") "
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639360 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") "
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") "
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.639475 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") pod \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\" (UID: \"c3ef84d6-150a-46b1-8e93-7e650c8be1ef\") "
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.645937 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.645957 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6" (OuterVolumeSpecName: "kube-api-access-h75f6") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "kube-api-access-h75f6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.672211 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory" (OuterVolumeSpecName: "inventory") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.675105 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.679717 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c3ef84d6-150a-46b1-8e93-7e650c8be1ef" (UID: "c3ef84d6-150a-46b1-8e93-7e650c8be1ef"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742419 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h75f6\" (UniqueName: \"kubernetes.io/projected/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-kube-api-access-h75f6\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742464 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742476 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742486 4867 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.742496 4867 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c3ef84d6-150a-46b1-8e93-7e650c8be1ef-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981136 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" event={"ID":"c3ef84d6-150a-46b1-8e93-7e650c8be1ef","Type":"ContainerDied","Data":"3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d"} Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981402 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3aaf41c3c873c5a763e6d91f73ddb83d4d8bf709983155009d70de531c985d" Feb 14 04:48:39 crc kubenswrapper[4867]: I0214 04:48:39.981531 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-vjz5q" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.125998 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:40 crc kubenswrapper[4867]: E0214 04:48:40.126811 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.126829 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.127069 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ef84d6-150a-46b1-8e93-7e650c8be1ef" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.128189 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130566 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130786 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.130945 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131140 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131307 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.131529 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.141943 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.158984 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159175 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159230 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159324 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.159676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.262768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263113 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263245 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263425 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263507 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.263707 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.266973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.266983 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.267964 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.268539 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.270082 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.284102 
4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:40 crc kubenswrapper[4867]: I0214 04:48:40.445829 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:48:41 crc kubenswrapper[4867]: I0214 04:48:41.050315 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m"] Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.008205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerStarted","Data":"0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170"} Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.008960 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerStarted","Data":"1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1"} Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.035642 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" podStartSLOduration=1.56610504 podStartE2EDuration="2.035619818s" podCreationTimestamp="2026-02-14 04:48:40 +0000 UTC" firstStartedPulling="2026-02-14 04:48:41.061457349 +0000 UTC m=+2353.142394673" lastFinishedPulling="2026-02-14 04:48:41.530972137 +0000 UTC m=+2353.611909451" observedRunningTime="2026-02-14 04:48:42.024141806 +0000 UTC m=+2354.105079140" watchObservedRunningTime="2026-02-14 04:48:42.035619818 +0000 UTC m=+2354.116557132" Feb 14 04:48:42 crc kubenswrapper[4867]: I0214 04:48:42.998315 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:42 crc kubenswrapper[4867]: E0214 04:48:42.998718 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:48:54 crc kubenswrapper[4867]: I0214 04:48:54.998473 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:48:54 crc kubenswrapper[4867]: E0214 04:48:54.999563 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:05 crc kubenswrapper[4867]: I0214 04:49:05.997356 
4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:05 crc kubenswrapper[4867]: E0214 04:49:05.998241 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:19 crc kubenswrapper[4867]: I0214 04:49:19.998245 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:20 crc kubenswrapper[4867]: E0214 04:49:20.000285 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:26 crc kubenswrapper[4867]: I0214 04:49:26.685105 4867 generic.go:334] "Generic (PLEG): container finished" podID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerID="0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170" exitCode=0 Feb 14 04:49:26 crc kubenswrapper[4867]: I0214 04:49:26.685244 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerDied","Data":"0e52fe21a2c715c09a621b92707814df326780c9f866675ff4fcb182f274d170"} Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.272331 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431195 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431289 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431380 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431401 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431422 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.431519 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") pod \"d07bc498-5b6c-465a-bda2-df814e9c19c8\" (UID: \"d07bc498-5b6c-465a-bda2-df814e9c19c8\") " Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.437219 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv" (OuterVolumeSpecName: "kube-api-access-jqghv") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "kube-api-access-jqghv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.438009 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.465132 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory" (OuterVolumeSpecName: "inventory") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.465318 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.466125 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.485342 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "d07bc498-5b6c-465a-bda2-df814e9c19c8" (UID: "d07bc498-5b6c-465a-bda2-df814e9c19c8"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534636 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqghv\" (UniqueName: \"kubernetes.io/projected/d07bc498-5b6c-465a-bda2-df814e9c19c8-kube-api-access-jqghv\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534672 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534683 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534693 4867 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534702 4867 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.534713 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d07bc498-5b6c-465a-bda2-df814e9c19c8-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715151 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" event={"ID":"d07bc498-5b6c-465a-bda2-df814e9c19c8","Type":"ContainerDied","Data":"1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1"} Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715205 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b02f1096ca26a8110685cbd032274948f4f92a37e4ba6e7f6eb9573c02dd7c1" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.715250 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.803736 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:28 crc kubenswrapper[4867]: E0214 04:49:28.804521 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.804564 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.804878 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07bc498-5b6c-465a-bda2-df814e9c19c8" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.805955 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809032 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809700 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809843 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.809905 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.810038 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.817141 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.840953 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841378 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841620 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.841891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.842136 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943516 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943585 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943642 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943689 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.943768 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.948790 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: 
\"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.950468 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.951170 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.952300 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:28 crc kubenswrapper[4867]: I0214 04:49:28.963746 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.125747 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.699201 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p"] Feb 14 04:49:29 crc kubenswrapper[4867]: I0214 04:49:29.728662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerStarted","Data":"ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba"} Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.113564 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.739336 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerStarted","Data":"9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4"} Feb 14 04:49:30 crc kubenswrapper[4867]: I0214 04:49:30.765559 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" podStartSLOduration=2.364376468 podStartE2EDuration="2.765499687s" podCreationTimestamp="2026-02-14 04:49:28 +0000 UTC" firstStartedPulling="2026-02-14 04:49:29.708349837 +0000 UTC m=+2401.789287151" lastFinishedPulling="2026-02-14 04:49:30.109473056 +0000 UTC m=+2402.190410370" observedRunningTime="2026-02-14 04:49:30.762837717 +0000 UTC m=+2402.843775041" watchObservedRunningTime="2026-02-14 04:49:30.765499687 +0000 UTC m=+2402.846437031" Feb 14 04:49:34 crc kubenswrapper[4867]: I0214 04:49:34.997990 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:34 crc kubenswrapper[4867]: E0214 04:49:34.998731 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:46 crc kubenswrapper[4867]: I0214 04:49:46.998376 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:49:47 crc kubenswrapper[4867]: E0214 04:49:46.999345 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:49:59 crc kubenswrapper[4867]: I0214 04:49:59.998375 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:00 crc kubenswrapper[4867]: E0214 04:50:00.000575 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:12 crc kubenswrapper[4867]: I0214 04:50:12.998123 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:13 crc kubenswrapper[4867]: E0214 04:50:12.999400 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:26 crc kubenswrapper[4867]: I0214 04:50:26.997750 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:27 crc kubenswrapper[4867]: E0214 04:50:26.998697 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:39 crc kubenswrapper[4867]: I0214 04:50:39.006893 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:39 crc kubenswrapper[4867]: E0214 04:50:39.007731 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:50:49 crc kubenswrapper[4867]: I0214 04:50:49.996948 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:50:49 crc kubenswrapper[4867]: E0214 04:50:49.997795 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:03 crc kubenswrapper[4867]: I0214 04:51:03.997829 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:04 crc kubenswrapper[4867]: E0214 04:51:03.998766 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:19 crc kubenswrapper[4867]: I0214 04:51:19.004744 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:19 crc kubenswrapper[4867]: E0214 04:51:19.005924 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:33 crc kubenswrapper[4867]: I0214 04:51:33.998072 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:34 crc kubenswrapper[4867]: E0214 04:51:34.002192 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:51:45 crc kubenswrapper[4867]: I0214 04:51:45.996864 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:51:45 crc kubenswrapper[4867]: E0214 04:51:45.997766 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:00 crc kubenswrapper[4867]: I0214 04:52:00.998607 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:01 crc kubenswrapper[4867]: E0214 04:52:00.999481 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:14 crc kubenswrapper[4867]: I0214 04:52:14.998391 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:15 crc kubenswrapper[4867]: E0214 04:52:14.999257 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:29 crc kubenswrapper[4867]: I0214 04:52:29.997565 4867 
scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:29 crc kubenswrapper[4867]: E0214 04:52:29.998587 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:52:44 crc kubenswrapper[4867]: I0214 04:52:44.997745 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:52:45 crc kubenswrapper[4867]: I0214 04:52:45.890602 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"} Feb 14 04:53:18 crc kubenswrapper[4867]: I0214 04:53:18.249987 4867 generic.go:334] "Generic (PLEG): container finished" podID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerID="9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4" exitCode=0 Feb 14 04:53:18 crc kubenswrapper[4867]: I0214 04:53:18.250163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerDied","Data":"9ccd9192f7366e861c0e4af53d462de7f1641852a4ce5ef2f14ad11d0dfe79e4"} Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.776990 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.873358 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874085 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874687 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.874755 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") pod \"8ec3156c-bcce-4dee-8ce5-7773409e880e\" (UID: \"8ec3156c-bcce-4dee-8ce5-7773409e880e\") " Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.880550 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.883122 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj" (OuterVolumeSpecName: "kube-api-access-5dbfj") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "kube-api-access-5dbfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.908174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.910538 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.923141 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory" (OuterVolumeSpecName: "inventory") pod "8ec3156c-bcce-4dee-8ce5-7773409e880e" (UID: "8ec3156c-bcce-4dee-8ce5-7773409e880e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978281 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978327 4867 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978341 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dbfj\" (UniqueName: \"kubernetes.io/projected/8ec3156c-bcce-4dee-8ce5-7773409e880e-kube-api-access-5dbfj\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978349 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:19 crc kubenswrapper[4867]: I0214 04:53:19.978359 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ec3156c-bcce-4dee-8ce5-7773409e880e-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" event={"ID":"8ec3156c-bcce-4dee-8ce5-7773409e880e","Type":"ContainerDied","Data":"ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba"} Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276672 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.276717 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ead84d037c9a6e54041b73f21829e99b2ee13151d4361f6e2bdce7250f6d25ba" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.401956 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:20 crc kubenswrapper[4867]: E0214 04:53:20.402632 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.402658 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.402962 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec3156c-bcce-4dee-8ce5-7773409e880e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.404007 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411334 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411384 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411595 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.411986 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.413314 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.415370 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.415390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.440142 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493358 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493410 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493488 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493553 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493615 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493639 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493660 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493756 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493874 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493913 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.493929 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596046 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596424 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596840 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.596972 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597144 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597260 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597358 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597802 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.597971 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.598163 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.600467 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.600911 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601019 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" 
(UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.601354 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.602233 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.602406 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.608445 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.611344 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.619591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"nova-edpm-deployment-openstack-edpm-ipam-s5lc4\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:20 crc kubenswrapper[4867]: I0214 04:53:20.732419 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:53:21 crc kubenswrapper[4867]: I0214 04:53:21.318665 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4"] Feb 14 04:53:21 crc kubenswrapper[4867]: I0214 04:53:21.324244 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.299026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerStarted","Data":"35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091"} Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.299343 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerStarted","Data":"22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35"} Feb 14 04:53:22 crc kubenswrapper[4867]: I0214 04:53:22.320063 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" podStartSLOduration=1.669509908 podStartE2EDuration="2.320042793s" podCreationTimestamp="2026-02-14 04:53:20 +0000 UTC" firstStartedPulling="2026-02-14 04:53:21.324004312 +0000 UTC m=+2633.404941626" lastFinishedPulling="2026-02-14 04:53:21.974537197 +0000 UTC m=+2634.055474511" observedRunningTime="2026-02-14 04:53:22.316651063 +0000 UTC m=+2634.397588407" watchObservedRunningTime="2026-02-14 04:53:22.320042793 +0000 UTC m=+2634.400980107" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.314931 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.318899 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.325947 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.420467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.421201 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.421484 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524209 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524332 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.524915 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.525000 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.525243 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.546424 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"community-operators-52dzz\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:36 crc kubenswrapper[4867]: I0214 04:54:36.655938 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.256586 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610449 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" exitCode=0 Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610543 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89"} Feb 14 04:54:37 crc kubenswrapper[4867]: I0214 04:54:37.610949 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"26d6ff1e05b77e17e7dadab06eeb78e805a361ff6edbdf729681eeed3227639f"} Feb 14 04:54:38 crc kubenswrapper[4867]: I0214 04:54:38.623912 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} Feb 14 04:54:40 crc kubenswrapper[4867]: I0214 04:54:40.646062 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" exitCode=0 Feb 14 04:54:40 crc kubenswrapper[4867]: I0214 04:54:40.646141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} Feb 14 04:54:41 crc kubenswrapper[4867]: I0214 04:54:41.673011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerStarted","Data":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} Feb 14 04:54:41 crc kubenswrapper[4867]: I0214 04:54:41.706425 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52dzz" podStartSLOduration=2.264111703 podStartE2EDuration="5.706401399s" podCreationTimestamp="2026-02-14 04:54:36 +0000 UTC" firstStartedPulling="2026-02-14 04:54:37.613184892 +0000 UTC m=+2709.694122206" lastFinishedPulling="2026-02-14 04:54:41.055474588 +0000 UTC m=+2713.136411902" observedRunningTime="2026-02-14 04:54:41.694375093 +0000 UTC m=+2713.775312417" watchObservedRunningTime="2026-02-14 04:54:41.706401399 +0000 UTC m=+2713.787338713" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.656288 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.656849 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.711318 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.778710 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:46 crc kubenswrapper[4867]: I0214 04:54:46.953717 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:48 crc kubenswrapper[4867]: I0214 04:54:48.753610 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52dzz" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" containerID="cri-o://09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" gracePeriod=2 Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.303174 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432627 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432840 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.432885 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") pod \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\" (UID: \"53d6fbce-336b-46b4-85fe-b03c0b7d9339\") " Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.433544 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities" (OuterVolumeSpecName: "utilities") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.434622 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.439450 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz" (OuterVolumeSpecName: "kube-api-access-8zjlz") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "kube-api-access-8zjlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.496160 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53d6fbce-336b-46b4-85fe-b03c0b7d9339" (UID: "53d6fbce-336b-46b4-85fe-b03c0b7d9339"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.538189 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zjlz\" (UniqueName: \"kubernetes.io/projected/53d6fbce-336b-46b4-85fe-b03c0b7d9339-kube-api-access-8zjlz\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.538230 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d6fbce-336b-46b4-85fe-b03c0b7d9339-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766132 4867 generic.go:334] "Generic (PLEG): container finished" podID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" exitCode=0 Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766201 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52dzz" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766746 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52dzz" event={"ID":"53d6fbce-336b-46b4-85fe-b03c0b7d9339","Type":"ContainerDied","Data":"26d6ff1e05b77e17e7dadab06eeb78e805a361ff6edbdf729681eeed3227639f"} Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.766788 4867 scope.go:117] "RemoveContainer" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.803806 4867 scope.go:117] "RemoveContainer" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.830142 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.838330 4867 scope.go:117] "RemoveContainer" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.854059 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52dzz"] Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.926427 4867 scope.go:117] "RemoveContainer" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.927089 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": container with ID starting with 
09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1 not found: ID does not exist" containerID="09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927174 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1"} err="failed to get container status \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": rpc error: code = NotFound desc = could not find container \"09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1\": container with ID starting with 09625455fde410a4535cb3c133cf3c021a93293ab1b62943f8d8ab93001e22a1 not found: ID does not exist" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927217 4867 scope.go:117] "RemoveContainer" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.927912 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": container with ID starting with 7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc not found: ID does not exist" containerID="7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927947 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc"} err="failed to get container status \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": rpc error: code = NotFound desc = could not find container \"7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc\": container with ID starting with 7601bce435eea6ccd54cad135f396d925fa3553a98738d45506dc83c1f60bcfc not found: ID does not exist" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.927970 4867 scope.go:117] "RemoveContainer" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: E0214 04:54:49.928256 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": container with ID starting with 3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89 not found: ID does not exist" containerID="3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89" Feb 14 04:54:49 crc kubenswrapper[4867]: I0214 04:54:49.928300 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89"} err="failed to get container status \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": rpc error: code = NotFound desc = could not find container \"3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89\": container with ID starting with 3a0158ad58ab99473d0a6771f7b81a8c4f2c53aff6439d0f5cd5ebe48d657a89 not found: ID does not exist" Feb 14 04:54:51 crc kubenswrapper[4867]: I0214 04:54:51.019722 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" path="/var/lib/kubelet/pods/53d6fbce-336b-46b4-85fe-b03c0b7d9339/volumes" Feb 14 04:55:01 crc kubenswrapper[4867]: I0214 04:55:01.251378 
4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:55:01 crc kubenswrapper[4867]: I0214 04:55:01.252057 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.703021 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704187 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704207 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704223 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-utilities" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704231 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-utilities" Feb 14 04:55:20 crc kubenswrapper[4867]: E0214 04:55:20.704281 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-content" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704288 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="extract-content" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.704583 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d6fbce-336b-46b4-85fe-b03c0b7d9339" containerName="registry-server" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.752408 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.752664 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879275 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879676 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.879766 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982757 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.982879 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.983157 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:20 crc kubenswrapper[4867]: I0214 04:55:20.983257 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.008973 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod 
\"redhat-marketplace-2r6zz\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.126797 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:21 crc kubenswrapper[4867]: I0214 04:55:21.654779 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157075 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" exitCode=0 Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157117 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162"} Feb 14 04:55:22 crc kubenswrapper[4867]: I0214 04:55:22.157551 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"801c2a49873ba7dc052c0cafff2d252c8c67d675c1ccdad781acd1f9ae903e7b"} Feb 14 04:55:23 crc kubenswrapper[4867]: I0214 04:55:23.179170 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} Feb 14 04:55:24 crc kubenswrapper[4867]: I0214 04:55:24.192845 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" exitCode=0 Feb 14 04:55:24 crc kubenswrapper[4867]: I0214 04:55:24.193043 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} Feb 14 04:55:25 crc kubenswrapper[4867]: I0214 04:55:25.210634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerStarted","Data":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} Feb 14 04:55:25 crc kubenswrapper[4867]: I0214 04:55:25.245047 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2r6zz" podStartSLOduration=2.770071637 podStartE2EDuration="5.245017588s" podCreationTimestamp="2026-02-14 04:55:20 +0000 UTC" firstStartedPulling="2026-02-14 04:55:22.163166483 +0000 UTC m=+2754.244103837" lastFinishedPulling="2026-02-14 04:55:24.638112474 +0000 UTC m=+2756.719049788" observedRunningTime="2026-02-14 04:55:25.230184058 +0000 UTC m=+2757.311121372" watchObservedRunningTime="2026-02-14 04:55:25.245017588 +0000 UTC m=+2757.325954902" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.127781 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.128405 4867 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.190978 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.251467 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.254837 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.348880 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:31 crc kubenswrapper[4867]: I0214 04:55:31.444963 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:33 crc kubenswrapper[4867]: I0214 04:55:33.305761 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2r6zz" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server" containerID="cri-o://8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" gracePeriod=2 Feb 14 04:55:33 crc kubenswrapper[4867]: I0214 04:55:33.921296 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.080967 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.081092 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.081413 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") pod \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\" (UID: \"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e\") " Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.082369 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities" (OuterVolumeSpecName: "utilities") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.083583 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.094924 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv" (OuterVolumeSpecName: "kube-api-access-k8tkv") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "kube-api-access-k8tkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.106400 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" (UID: "6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.186750 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8tkv\" (UniqueName: \"kubernetes.io/projected/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-kube-api-access-k8tkv\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.186794 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322336 4867 generic.go:334] "Generic (PLEG): container finished" podID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" exitCode=0 Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322554 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2r6zz" event={"ID":"6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e","Type":"ContainerDied","Data":"801c2a49873ba7dc052c0cafff2d252c8c67d675c1ccdad781acd1f9ae903e7b"} Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322493 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2r6zz" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.322582 4867 scope.go:117] "RemoveContainer" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.358669 4867 scope.go:117] "RemoveContainer" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.381873 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.392687 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2r6zz"] Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.411044 4867 scope.go:117] "RemoveContainer" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.469957 4867 scope.go:117] "RemoveContainer" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.470650 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": container with ID starting with 8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a not found: ID does not exist" containerID="8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.470742 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a"} err="failed to get container status \"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": rpc error: code = NotFound desc = could not find container \"8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a\": container with ID starting with 8d21984e496d82e59be4bd5aa1d091470381a66d27d163c606b9657fb5273f2a not found: ID does not exist" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.470821 4867 scope.go:117] "RemoveContainer" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.471271 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": container with ID starting with 76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb not found: ID does not exist" containerID="76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.471744 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb"} err="failed to get container status \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": rpc error: code = NotFound desc = could not find container \"76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb\": container with ID starting with 76c330e82722774886ebf7ff260e7aa7cd9756e5216bc8267edf5c81673342eb not found: ID does not exist" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.471831 4867 scope.go:117] "RemoveContainer" 
containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: E0214 04:55:34.472266 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": container with ID starting with 864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162 not found: ID does not exist" containerID="864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162" Feb 14 04:55:34 crc kubenswrapper[4867]: I0214 04:55:34.472357 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162"} err="failed to get container status \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": rpc error: code = NotFound desc = could not find container \"864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162\": container with ID starting with 864cea9c6fd51a05a021fd70f34da6d876138831664ba7f7b5515cfa137ca162 not found: ID does not exist" Feb 14 04:55:35 crc kubenswrapper[4867]: I0214 04:55:35.011527 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" path="/var/lib/kubelet/pods/6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e/volumes" Feb 14 04:55:43 crc kubenswrapper[4867]: I0214 04:55:43.459939 4867 generic.go:334] "Generic (PLEG): container finished" podID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerID="35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091" exitCode=0 Feb 14 04:55:43 crc kubenswrapper[4867]: I0214 04:55:43.460805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerDied","Data":"35ba4629751c3d1c99df22ad826fbdecb0b6da7011373c7fcf15710f10455091"} Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.072393 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204636 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204763 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204932 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.204970 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205011 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205063 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205115 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205187 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205342 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.205455 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") pod \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\" (UID: \"8c3553e4-9d3b-4c1d-bbc3-35371d733c86\") " Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.212991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz" (OuterVolumeSpecName: "kube-api-access-p9ccz") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "kube-api-access-p9ccz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.213738 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.239345 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.245602 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.252273 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.252739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.262533 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.267861 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.270235 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.270603 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.273766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory" (OuterVolumeSpecName: "inventory") pod "8c3553e4-9d3b-4c1d-bbc3-35371d733c86" (UID: "8c3553e4-9d3b-4c1d-bbc3-35371d733c86"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309075 4867 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309113 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309123 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309133 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309142 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309152 4867 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309161 4867 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309170 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309178 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9ccz\" (UniqueName: \"kubernetes.io/projected/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-kube-api-access-p9ccz\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309186 4867 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.309197 4867 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/8c3553e4-9d3b-4c1d-bbc3-35371d733c86-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.487316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" event={"ID":"8c3553e4-9d3b-4c1d-bbc3-35371d733c86","Type":"ContainerDied","Data":"22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35"} Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 
04:55:45.487377 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ea790dd323fc348f6fd0cafee4bad57f394f8293bdc77fe1ca0af9b1394a35" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.487708 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-s5lc4" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.622796 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"] Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623607 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623628 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623653 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-content" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623661 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-content" Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623685 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-utilities" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623693 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="extract-utilities" Feb 14 04:55:45 crc kubenswrapper[4867]: E0214 04:55:45.623733 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.623741 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.624114 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c3553e4-9d3b-4c1d-bbc3-35371d733c86" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.624135 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6248f22c-a2aa-4bd5-9d4d-6eab37a9ce0e" containerName="registry-server" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.625611 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629440 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629497 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.629829 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.630000 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.635839 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.639071 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"] Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720373 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720498 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720594 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720655 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 
04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720693 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.720762 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.822608 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823296 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823327 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823348 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823441 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.823473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.830238 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.830789 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.831068 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.831768 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.832222 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.832592 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.845957 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhlx\" (UniqueName: 
\"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:45 crc kubenswrapper[4867]: I0214 04:55:45.950090 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:55:46 crc kubenswrapper[4867]: I0214 04:55:46.689033 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq"] Feb 14 04:55:47 crc kubenswrapper[4867]: I0214 04:55:47.525253 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerStarted","Data":"1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4"} Feb 14 04:55:48 crc kubenswrapper[4867]: I0214 04:55:48.539850 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerStarted","Data":"fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8"} Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.251420 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.252036 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.252096 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.253135 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.253188 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" gracePeriod=600 Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.703815 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" exitCode=0 Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.703893 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7"} Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.704212 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"} Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.704247 4867 scope.go:117] "RemoveContainer" containerID="2e46dcab63865af965f1ceab9775684d2c284c2072e738aed0acdc7b372802d2" Feb 14 04:56:01 crc kubenswrapper[4867]: I0214 04:56:01.733017 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" podStartSLOduration=16.059550371 podStartE2EDuration="16.732993094s" podCreationTimestamp="2026-02-14 04:55:45 +0000 UTC" firstStartedPulling="2026-02-14 04:55:46.693450578 +0000 UTC m=+2778.774387902" lastFinishedPulling="2026-02-14 04:55:47.366893311 +0000 UTC m=+2779.447830625" observedRunningTime="2026-02-14 04:55:48.563269471 +0000 UTC m=+2780.644206785" watchObservedRunningTime="2026-02-14 04:56:01.732993094 +0000 UTC m=+2793.813930408" Feb 14 04:56:19 crc kubenswrapper[4867]: I0214 04:56:19.983787 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:56:19 crc kubenswrapper[4867]: I0214 04:56:19.987782 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.006972 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098405 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098461 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.098565 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201382 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 
04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201464 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.201598 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.202146 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.202211 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.232449 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"redhat-operators-5zhn6\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.327615 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:20 crc kubenswrapper[4867]: I0214 04:56:20.919652 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975367 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9" exitCode=0 Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"} Feb 14 04:56:21 crc kubenswrapper[4867]: I0214 04:56:21.975872 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"56a1eb0b8c1466acdb90dcebf861602011eca4a1fc13f846a3780bf30b13d856"} Feb 14 04:56:24 crc kubenswrapper[4867]: I0214 04:56:24.013667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"} Feb 14 04:56:31 crc kubenswrapper[4867]: I0214 04:56:31.120543 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b" exitCode=0 Feb 14 04:56:31 crc kubenswrapper[4867]: I0214 04:56:31.120662 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"} Feb 14 04:56:33 crc kubenswrapper[4867]: I0214 04:56:33.144448 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerStarted","Data":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"} Feb 14 04:56:33 crc kubenswrapper[4867]: I0214 04:56:33.173117 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zhn6" podStartSLOduration=4.089503238 podStartE2EDuration="14.173097343s" podCreationTimestamp="2026-02-14 04:56:19 +0000 UTC" firstStartedPulling="2026-02-14 04:56:21.979755752 +0000 UTC m=+2814.060693066" lastFinishedPulling="2026-02-14 04:56:32.063349857 +0000 UTC m=+2824.144287171" observedRunningTime="2026-02-14 04:56:33.163824089 +0000 UTC m=+2825.244761423" watchObservedRunningTime="2026-02-14 04:56:33.173097343 +0000 UTC m=+2825.254034657" Feb 14 04:56:40 crc kubenswrapper[4867]: I0214 04:56:40.327811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:40 crc kubenswrapper[4867]: I0214 04:56:40.328727 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:56:41 crc kubenswrapper[4867]: I0214 04:56:41.416607 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zhn6" 
podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:56:41 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:56:41 crc kubenswrapper[4867]: > Feb 14 04:56:51 crc kubenswrapper[4867]: I0214 04:56:51.374680 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zhn6" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" probeResult="failure" output=< Feb 14 04:56:51 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 04:56:51 crc kubenswrapper[4867]: > Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.403811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.470828 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:57:00 crc kubenswrapper[4867]: I0214 04:57:00.666980 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:57:01 crc kubenswrapper[4867]: I0214 04:57:01.464199 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zhn6" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" containerID="cri-o://55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" gracePeriod=2 Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.012388 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.195828 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196313 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196361 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") pod \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\" (UID: \"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0\") " Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.196889 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities" (OuterVolumeSpecName: "utilities") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.197063 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.209801 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp" (OuterVolumeSpecName: "kube-api-access-vlzbp") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "kube-api-access-vlzbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.299354 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlzbp\" (UniqueName: \"kubernetes.io/projected/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-kube-api-access-vlzbp\") on node \"crc\" DevicePath \"\"" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.339590 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" (UID: "7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.401641 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489123 4867 generic.go:334] "Generic (PLEG): container finished" podID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" exitCode=0 Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489196 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"} Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zhn6" event={"ID":"7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0","Type":"ContainerDied","Data":"56a1eb0b8c1466acdb90dcebf861602011eca4a1fc13f846a3780bf30b13d856"} Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.489258 4867 scope.go:117] "RemoveContainer" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.491476 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zhn6" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.531191 4867 scope.go:117] "RemoveContainer" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.566625 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.581331 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zhn6"] Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.593483 4867 scope.go:117] "RemoveContainer" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.622733 4867 scope.go:117] "RemoveContainer" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.623385 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": container with ID starting with 55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517 not found: ID does not exist" containerID="55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.623752 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517"} err="failed to get container status \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": rpc error: code = NotFound desc = could not find container \"55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517\": container with ID starting with 55bc23e5514e0a902ef30ceb2885c5568cc7b8adceac585adb80b612dd609517 not found: ID does not exist" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.623965 4867 scope.go:117] "RemoveContainer" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b" Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.624639 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": container with ID starting with 11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b not found: ID does not exist" containerID="11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.624698 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b"} err="failed to get container status \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": rpc error: code = NotFound desc = could not find container \"11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b\": container with ID starting with 11dedad862d5970ba831e4baa8e2a52888ae530c6cd750bd2a3fd72654bd608b not found: ID does not exist" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.624812 4867 scope.go:117] "RemoveContainer" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9" Feb 14 04:57:02 crc kubenswrapper[4867]: E0214 04:57:02.626462 4867 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": container with ID starting with 725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9 not found: ID does not exist" containerID="725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9" Feb 14 04:57:02 crc kubenswrapper[4867]: I0214 04:57:02.626541 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9"} err="failed to get container status \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": rpc error: code = NotFound desc = could not find container \"725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9\": container with ID starting with 725585c3102ae70fa410a91152e5b75475823051c87b1cb7f8007f0a066df3e9 not found: ID does not exist" Feb 14 04:57:03 crc kubenswrapper[4867]: I0214 04:57:03.013268 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" path="/var/lib/kubelet/pods/7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0/volumes" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.685623 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687454 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-content" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687476 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-content" Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687519 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-utilities" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687527 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="extract-utilities" Feb 14 04:57:44 crc kubenswrapper[4867]: E0214 04:57:44.687542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687548 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.687781 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5baa8c-6e53-4abd-9e8f-c76d2ce5d6c0" containerName="registry-server" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.692131 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.704321 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730070 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730217 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.730251 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.838560 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.839141 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.839280 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.840149 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.840431 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:44 crc kubenswrapper[4867]: I0214 04:57:44.872971 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"certified-operators-nx5fz\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:45 crc kubenswrapper[4867]: I0214 04:57:45.029228 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:45 crc kubenswrapper[4867]: I0214 04:57:45.599040 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:57:46 crc kubenswrapper[4867]: I0214 04:57:46.038128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} Feb 14 04:57:46 crc kubenswrapper[4867]: I0214 04:57:46.038533 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"88f2bea0ce99dfcf034026bb7b57d2e0b66ee5141d7ee7aec3701eb987c003d7"} Feb 14 04:57:47 crc kubenswrapper[4867]: I0214 04:57:47.054477 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" exitCode=0 Feb 14 04:57:47 crc kubenswrapper[4867]: I0214 04:57:47.054553 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} Feb 14 04:57:49 crc kubenswrapper[4867]: I0214 04:57:49.079899 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} Feb 14 04:57:51 crc kubenswrapper[4867]: I0214 04:57:51.106596 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" exitCode=0 Feb 14 04:57:51 crc kubenswrapper[4867]: I0214 04:57:51.106695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} Feb 14 04:57:52 crc kubenswrapper[4867]: I0214 04:57:52.123007 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerStarted","Data":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} Feb 14 04:57:52 crc kubenswrapper[4867]: I0214 04:57:52.144903 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nx5fz" podStartSLOduration=3.6881285679999998 podStartE2EDuration="8.144878326s" podCreationTimestamp="2026-02-14 04:57:44 +0000 UTC" firstStartedPulling="2026-02-14 04:57:47.05747451 +0000 UTC m=+2899.138411824" lastFinishedPulling="2026-02-14 
04:57:51.514224268 +0000 UTC m=+2903.595161582" observedRunningTime="2026-02-14 04:57:52.140668986 +0000 UTC m=+2904.221606300" watchObservedRunningTime="2026-02-14 04:57:52.144878326 +0000 UTC m=+2904.225815650" Feb 14 04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.029996 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.030633 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:57:55 crc kubenswrapper[4867]: I0214 04:57:55.085169 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:01 crc kubenswrapper[4867]: I0214 04:58:01.250627 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:58:01 crc kubenswrapper[4867]: I0214 04:58:01.250985 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.094787 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.167308 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.264039 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nx5fz" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" containerID="cri-o://e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" gracePeriod=2 Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.888644 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.978984 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.979064 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.979369 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") pod \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\" (UID: \"5c7159af-0dbf-4a2b-b483-522d4e6a28ab\") " Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.980092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities" (OuterVolumeSpecName: "utilities") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:58:05 crc kubenswrapper[4867]: I0214 04:58:05.986552 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr" (OuterVolumeSpecName: "kube-api-access-ngblr") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "kube-api-access-ngblr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.025137 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c7159af-0dbf-4a2b-b483-522d4e6a28ab" (UID: "5c7159af-0dbf-4a2b-b483-522d4e6a28ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083217 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083262 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.083275 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngblr\" (UniqueName: \"kubernetes.io/projected/5c7159af-0dbf-4a2b-b483-522d4e6a28ab-kube-api-access-ngblr\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277608 4867 generic.go:334] "Generic (PLEG): container finished" podID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" exitCode=0 Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277721 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nx5fz" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.277687 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.279482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nx5fz" event={"ID":"5c7159af-0dbf-4a2b-b483-522d4e6a28ab","Type":"ContainerDied","Data":"88f2bea0ce99dfcf034026bb7b57d2e0b66ee5141d7ee7aec3701eb987c003d7"} Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.279564 4867 scope.go:117] "RemoveContainer" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.305563 4867 scope.go:117] "RemoveContainer" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.321341 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.336927 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nx5fz"] Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.346848 4867 scope.go:117] "RemoveContainer" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.399984 4867 scope.go:117] "RemoveContainer" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.400854 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": container with ID starting with e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa not found: ID does not exist" containerID="e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.400922 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa"} err="failed to get container status \"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": rpc error: code = NotFound desc = could not find container \"e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa\": container with ID starting with e64b661e3de3dfc596fc4969138b032bf4c10f106ac72d06eba4224f9349acfa not found: ID does not exist" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.400983 4867 scope.go:117] "RemoveContainer" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.401635 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": container with ID starting with 4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9 not found: ID does not exist" containerID="4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.401738 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9"} err="failed to get container status \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": rpc error: code = NotFound desc = could not find container \"4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9\": container with ID starting with 4430d66ac8e03a617f21f2b5aafada4dd2fbeac1543b1271caa80ec11fcd3af9 not found: ID does not exist" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.401757 4867 scope.go:117] "RemoveContainer" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: E0214 04:58:06.403865 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": container with ID starting with ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66 not found: ID does not exist" containerID="ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66" Feb 14 04:58:06 crc kubenswrapper[4867]: I0214 04:58:06.403935 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66"} err="failed to get container status \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": rpc error: code = NotFound desc = could not find container \"ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66\": container with ID starting with ada88f8d7d9e2b4d7ac7ce8690527bc5fd6680a0ad7c523addf8e3c666af1e66 not found: ID does not exist" Feb 14 04:58:07 crc kubenswrapper[4867]: I0214 04:58:07.013084 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" path="/var/lib/kubelet/pods/5c7159af-0dbf-4a2b-b483-522d4e6a28ab/volumes" Feb 14 04:58:08 crc kubenswrapper[4867]: I0214 04:58:08.325664 4867 generic.go:334] "Generic (PLEG): container finished" podID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerID="fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8" exitCode=0 Feb 14 04:58:08 crc kubenswrapper[4867]: 
I0214 04:58:08.326204 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerDied","Data":"fb47d4a4c558dace70949450fb42adb65e005b406785bd04b7e7c0bb95c122a8"} Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.859874 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989447 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989635 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989731 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989812 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.989868 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.990661 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.990732 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") pod \"b70721c5-f29f-4cc4-8ee7-88341a81765d\" (UID: \"b70721c5-f29f-4cc4-8ee7-88341a81765d\") " Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.996249 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx" (OuterVolumeSpecName: "kube-api-access-5zhlx") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "kube-api-access-5zhlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 04:58:09 crc kubenswrapper[4867]: I0214 04:58:09.998087 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.022302 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.023536 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.025428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory" (OuterVolumeSpecName: "inventory") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.031316 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.032296 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b70721c5-f29f-4cc4-8ee7-88341a81765d" (UID: "b70721c5-f29f-4cc4-8ee7-88341a81765d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094696 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhlx\" (UniqueName: \"kubernetes.io/projected/b70721c5-f29f-4cc4-8ee7-88341a81765d-kube-api-access-5zhlx\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094740 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094779 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094794 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094808 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094819 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.094831 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b70721c5-f29f-4cc4-8ee7-88341a81765d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" event={"ID":"b70721c5-f29f-4cc4-8ee7-88341a81765d","Type":"ContainerDied","Data":"1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4"} Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347694 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1537e8bfe998fee74f949f5917923a54ff718a7829d5e8a62f41549a3acc0bf4" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.347714 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472181 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472823 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-utilities" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472851 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-utilities" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472874 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472883 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472917 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-content" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472925 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="extract-content" Feb 14 04:58:10 crc kubenswrapper[4867]: E0214 04:58:10.472951 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.472958 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.473254 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70721c5-f29f-4cc4-8ee7-88341a81765d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.473311 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7159af-0dbf-4a2b-b483-522d4e6a28ab" containerName="registry-server" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.474772 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491064 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491270 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491370 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.491798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.492248 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.494592 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508533 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508648 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508687 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.508722 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509022 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509463 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.509564 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.611954 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.612419 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.612442 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613378 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613664 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.613735 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.616366 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617025 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617440 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.617947 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.618666 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.628828 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:10 crc kubenswrapper[4867]: I0214 04:58:10.805693 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 04:58:11 crc kubenswrapper[4867]: I0214 04:58:11.341446 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps"] Feb 14 04:58:11 crc kubenswrapper[4867]: I0214 04:58:11.360540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerStarted","Data":"a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1"} Feb 14 04:58:12 crc kubenswrapper[4867]: I0214 04:58:12.372279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerStarted","Data":"003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d"} Feb 14 04:58:31 crc kubenswrapper[4867]: I0214 04:58:31.250847 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:58:31 crc kubenswrapper[4867]: I0214 04:58:31.252677 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.250870 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.251471 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.251538 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.252486 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.252574 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" gracePeriod=600 Feb 14 04:59:01 crc kubenswrapper[4867]: E0214 04:59:01.375706 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922520 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" exitCode=0 Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d"} Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.922848 4867 scope.go:117] "RemoveContainer" containerID="e1b89ddb8a2754137d33a14676d4e33653c306a715ebb64010e116482bf849b7" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.923466 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:01 crc kubenswrapper[4867]: E0214 04:59:01.924112 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:01 crc kubenswrapper[4867]: I0214 04:59:01.966077 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" podStartSLOduration=51.587980054 podStartE2EDuration="51.966055947s" podCreationTimestamp="2026-02-14 04:58:10 +0000 UTC" firstStartedPulling="2026-02-14 04:58:11.350068092 +0000 UTC m=+2923.431005406" lastFinishedPulling="2026-02-14 04:58:11.728143985 +0000 UTC m=+2923.809081299" observedRunningTime="2026-02-14 04:58:12.391129333 +0000 UTC m=+2924.472066647" watchObservedRunningTime="2026-02-14 04:59:01.966055947 +0000 UTC m=+2974.046993261" Feb 14 04:59:15 crc kubenswrapper[4867]: 
I0214 04:59:15.998456 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:16 crc kubenswrapper[4867]: E0214 04:59:15.999773 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:29 crc kubenswrapper[4867]: I0214 04:59:29.997353 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:30 crc kubenswrapper[4867]: E0214 04:59:29.998413 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:44 crc kubenswrapper[4867]: I0214 04:59:44.998026 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:44 crc kubenswrapper[4867]: E0214 04:59:44.998997 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 04:59:59 crc kubenswrapper[4867]: I0214 04:59:59.021733 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 04:59:59 crc kubenswrapper[4867]: E0214 04:59:59.022520 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.169334 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.173542 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.176069 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.178020 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.187645 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.242669 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.243094 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.243892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348175 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348473 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.348613 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.349848 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod 
\"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.358556 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.371197 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"collect-profiles-29517420-spkbd\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:00 crc kubenswrapper[4867]: I0214 05:00:00.505196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.015749 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608078 4867 generic.go:334] "Generic (PLEG): container finished" podID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerID="7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa" exitCode=0 Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608667 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerDied","Data":"7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa"} Feb 14 05:00:01 crc kubenswrapper[4867]: I0214 05:00:01.608710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerStarted","Data":"07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c"} Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.044722 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.134076 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.134556 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.135743 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") pod \"9f3d9933-ea61-47f2-a857-edd1af2baf67\" (UID: \"9f3d9933-ea61-47f2-a857-edd1af2baf67\") " Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.137697 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.150882 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.169038 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd" (OuterVolumeSpecName: "kube-api-access-ff5vd") pod "9f3d9933-ea61-47f2-a857-edd1af2baf67" (UID: "9f3d9933-ea61-47f2-a857-edd1af2baf67"). InnerVolumeSpecName "kube-api-access-ff5vd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239869 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff5vd\" (UniqueName: \"kubernetes.io/projected/9f3d9933-ea61-47f2-a857-edd1af2baf67-kube-api-access-ff5vd\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239930 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f3d9933-ea61-47f2-a857-edd1af2baf67-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.239947 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f3d9933-ea61-47f2-a857-edd1af2baf67-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.633918 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" event={"ID":"9f3d9933-ea61-47f2-a857-edd1af2baf67","Type":"ContainerDied","Data":"07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c"} Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.634041 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07c5b4135b70b75f89742118bc951a47f292a000ce7087054d3474ffc91ebd6c" Feb 14 05:00:03 crc kubenswrapper[4867]: I0214 05:00:03.634003 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd" Feb 14 05:00:04 crc kubenswrapper[4867]: I0214 05:00:04.136563 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 05:00:04 crc kubenswrapper[4867]: I0214 05:00:04.149238 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517375-78vgp"] Feb 14 05:00:05 crc kubenswrapper[4867]: I0214 05:00:05.045767 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb80aae8-69eb-4098-af64-8a1ace025d53" path="/var/lib/kubelet/pods/cb80aae8-69eb-4098-af64-8a1ace025d53/volumes" Feb 14 05:00:11 crc kubenswrapper[4867]: I0214 05:00:11.833841 4867 generic.go:334] "Generic (PLEG): container finished" podID="43f6ac0f-9203-4827-bd57-acbae7793028" containerID="003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d" exitCode=0 Feb 14 05:00:11 crc kubenswrapper[4867]: I0214 05:00:11.833927 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerDied","Data":"003d01ed9d647e03defd92a68ed32472c72d8cbdda637fda0cbae83f953fc73d"} Feb 14 05:00:12 crc kubenswrapper[4867]: I0214 05:00:12.010555 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:12 crc kubenswrapper[4867]: E0214 05:00:12.011128 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" 
podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.364535 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415577 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415714 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415830 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.415984 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416029 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416103 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.416190 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") pod \"43f6ac0f-9203-4827-bd57-acbae7793028\" (UID: \"43f6ac0f-9203-4827-bd57-acbae7793028\") " Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.421912 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.429155 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm" (OuterVolumeSpecName: "kube-api-access-zt5fm") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "kube-api-access-zt5fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.451004 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory" (OuterVolumeSpecName: "inventory") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.464587 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.480594 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.484390 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.488428 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "43f6ac0f-9203-4827-bd57-acbae7793028" (UID: "43f6ac0f-9203-4827-bd57-acbae7793028"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519288 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519325 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519336 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519347 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519357 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt5fm\" (UniqueName: \"kubernetes.io/projected/43f6ac0f-9203-4827-bd57-acbae7793028-kube-api-access-zt5fm\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519366 4867 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.519376 4867 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/43f6ac0f-9203-4827-bd57-acbae7793028-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" event={"ID":"43f6ac0f-9203-4827-bd57-acbae7793028","Type":"ContainerDied","Data":"a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1"} Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855732 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a78661dba6d024e4f135e76ec3bde6ffb1cabf67e82e662a787795dbe9e05ef1" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.855467 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.963346 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:13 crc kubenswrapper[4867]: E0214 05:00:13.964044 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964071 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: E0214 05:00:13.964095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964107 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964403 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" containerName="collect-profiles" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.964450 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="43f6ac0f-9203-4827-bd57-acbae7793028" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.965847 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.968487 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.968537 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-24tmg" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969135 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969390 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.969719 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Feb 14 05:00:13 crc kubenswrapper[4867]: I0214 05:00:13.981653 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030812 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030883 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.030950 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.031147 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.031187 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.133402 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.133876 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134054 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134187 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.134347 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.139990 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.140269 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.147075 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.148977 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.151636 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jgnc5\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.290124 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.836593 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5"] Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.845410 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:00:14 crc kubenswrapper[4867]: I0214 05:00:14.870391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerStarted","Data":"1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783"} Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.855546 4867 scope.go:117] "RemoveContainer" containerID="5dc1b7ab37c9c3df2b530ac74d487ec3f80c14970b4446bee10e3a796e0af837" Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.882129 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerStarted","Data":"fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57"} Feb 14 05:00:15 crc kubenswrapper[4867]: I0214 05:00:15.966137 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" podStartSLOduration=2.542089626 podStartE2EDuration="2.966109116s" podCreationTimestamp="2026-02-14 05:00:13 +0000 UTC" firstStartedPulling="2026-02-14 05:00:14.845218947 +0000 UTC m=+3046.926156261" lastFinishedPulling="2026-02-14 05:00:15.269238427 +0000 UTC m=+3047.350175751" observedRunningTime="2026-02-14 05:00:15.93163682 +0000 UTC m=+3048.012574154" watchObservedRunningTime="2026-02-14 05:00:15.966109116 +0000 UTC m=+3048.047046430" Feb 14 05:00:22 crc kubenswrapper[4867]: I0214 05:00:22.998250 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:23 crc kubenswrapper[4867]: E0214 05:00:23.000789 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:31 crc kubenswrapper[4867]: I0214 05:00:31.044652 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerID="fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57" exitCode=0 Feb 14 05:00:31 crc kubenswrapper[4867]: I0214 05:00:31.044733 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerDied","Data":"fd5ea480ef3a3e063a60881d0bda6df9eff17175cb1496b51571f74ef0c13c57"} Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.558855 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.694907 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695107 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695390 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695562 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.695860 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") pod \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\" (UID: \"6e133b22-e3ca-4be2-8e71-56b6ca79dab2\") " Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.713290 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb" (OuterVolumeSpecName: "kube-api-access-xsxkb") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "kube-api-access-xsxkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.740731 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.743197 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.751741 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.767997 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory" (OuterVolumeSpecName: "inventory") pod "6e133b22-e3ca-4be2-8e71-56b6ca79dab2" (UID: "6e133b22-e3ca-4be2-8e71-56b6ca79dab2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800432 4867 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800472 4867 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-inventory\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800485 4867 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800495 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsxkb\" (UniqueName: \"kubernetes.io/projected/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-kube-api-access-xsxkb\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:32 crc kubenswrapper[4867]: I0214 05:00:32.800526 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6e133b22-e3ca-4be2-8e71-56b6ca79dab2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066128 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" event={"ID":"6e133b22-e3ca-4be2-8e71-56b6ca79dab2","Type":"ContainerDied","Data":"1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783"} Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066191 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7f446376a872199c67180e67f61670518fde5c6f9e9ab3cf68a8b60a35e783" Feb 14 05:00:33 crc kubenswrapper[4867]: I0214 05:00:33.066265 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jgnc5" Feb 14 05:00:37 crc kubenswrapper[4867]: I0214 05:00:37.998112 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:37 crc kubenswrapper[4867]: E0214 05:00:37.998889 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:00:49 crc kubenswrapper[4867]: I0214 05:00:49.012571 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:00:49 crc kubenswrapper[4867]: E0214 05:00:49.014011 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.165433 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:00 crc kubenswrapper[4867]: E0214 05:01:00.166399 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.166415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.166676 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e133b22-e3ca-4be2-8e71-56b6ca79dab2" containerName="logging-edpm-deployment-openstack-edpm-ipam" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.167443 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.179349 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.241295 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.241865 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.242186 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.242235 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345137 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345193 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345291 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.345417 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.360299 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.361131 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.364228 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.370644 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"keystone-cron-29517421-jh7t8\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:00 crc kubenswrapper[4867]: I0214 05:01:00.500267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.001379 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:01 crc kubenswrapper[4867]: E0214 05:01:01.003050 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.180248 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517421-jh7t8"] Feb 14 05:01:01 crc kubenswrapper[4867]: I0214 05:01:01.387290 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerStarted","Data":"816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d"} Feb 14 05:01:02 crc kubenswrapper[4867]: I0214 05:01:02.402149 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerStarted","Data":"7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b"} Feb 14 05:01:02 crc kubenswrapper[4867]: I0214 05:01:02.439172 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29517421-jh7t8" podStartSLOduration=2.439141227 podStartE2EDuration="2.439141227s" podCreationTimestamp="2026-02-14 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:01:02.434652879 +0000 UTC m=+3094.515590203" 
watchObservedRunningTime="2026-02-14 05:01:02.439141227 +0000 UTC m=+3094.520078541" Feb 14 05:01:05 crc kubenswrapper[4867]: I0214 05:01:05.447383 4867 generic.go:334] "Generic (PLEG): container finished" podID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerID="7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b" exitCode=0 Feb 14 05:01:05 crc kubenswrapper[4867]: I0214 05:01:05.447458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerDied","Data":"7677ee816b0e5bb144d41267e4d59e1a5c59160f5592ed5850a45af78284d93b"} Feb 14 05:01:06 crc kubenswrapper[4867]: I0214 05:01:06.956607 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064466 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064613 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.064775 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") pod \"dabbee2b-0869-439e-8c9c-f417ab44f850\" (UID: \"dabbee2b-0869-439e-8c9c-f417ab44f850\") " Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.086926 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf" (OuterVolumeSpecName: "kube-api-access-f9rjf") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "kube-api-access-f9rjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.092157 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.139991 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.165050 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data" (OuterVolumeSpecName: "config-data") pod "dabbee2b-0869-439e-8c9c-f417ab44f850" (UID: "dabbee2b-0869-439e-8c9c-f417ab44f850"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174257 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9rjf\" (UniqueName: \"kubernetes.io/projected/dabbee2b-0869-439e-8c9c-f417ab44f850-kube-api-access-f9rjf\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174314 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174329 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.174338 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabbee2b-0869-439e-8c9c-f417ab44f850-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478217 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517421-jh7t8" event={"ID":"dabbee2b-0869-439e-8c9c-f417ab44f850","Type":"ContainerDied","Data":"816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d"} Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478261 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="816ebd413d81e166bfe420e2d22e7ab22783d6ed6ec35937830d65a3c1c8e37d" Feb 14 05:01:07 crc kubenswrapper[4867]: I0214 05:01:07.478299 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517421-jh7t8" Feb 14 05:01:15 crc kubenswrapper[4867]: I0214 05:01:15.997966 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:16 crc kubenswrapper[4867]: E0214 05:01:15.998833 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:27 crc kubenswrapper[4867]: I0214 05:01:27.998007 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:27 crc kubenswrapper[4867]: E0214 05:01:27.998853 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:41 crc kubenswrapper[4867]: I0214 05:01:40.999142 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:41 crc kubenswrapper[4867]: E0214 05:01:41.001955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:01:54 crc kubenswrapper[4867]: I0214 05:01:54.997560 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:01:54 crc kubenswrapper[4867]: E0214 05:01:54.998623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:09 crc kubenswrapper[4867]: I0214 05:02:09.998156 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:10 crc kubenswrapper[4867]: E0214 05:02:09.999311 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:20 crc kubenswrapper[4867]: I0214 05:02:20.997550 4867 scope.go:117] "RemoveContainer" 
containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:20 crc kubenswrapper[4867]: E0214 05:02:20.998926 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:33 crc kubenswrapper[4867]: I0214 05:02:33.997406 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:33 crc kubenswrapper[4867]: E0214 05:02:33.998194 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:44 crc kubenswrapper[4867]: I0214 05:02:44.997939 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:02:44 crc kubenswrapper[4867]: E0214 05:02:44.999033 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:02:59 crc kubenswrapper[4867]: I0214 05:02:59.997531 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:00 crc kubenswrapper[4867]: E0214 05:02:59.998487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:13 crc kubenswrapper[4867]: I0214 05:03:13.997990 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:14 crc kubenswrapper[4867]: E0214 05:03:13.998845 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:25 crc kubenswrapper[4867]: I0214 05:03:25.998254 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:26 crc kubenswrapper[4867]: E0214 05:03:25.999414 4867 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:40 crc kubenswrapper[4867]: I0214 05:03:40.997462 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:40 crc kubenswrapper[4867]: E0214 05:03:40.998663 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:03:54 crc kubenswrapper[4867]: I0214 05:03:54.998173 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:03:54 crc kubenswrapper[4867]: E0214 05:03:54.999019 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:04:05 crc kubenswrapper[4867]: I0214 05:04:05.998280 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:04:06 crc kubenswrapper[4867]: I0214 05:04:06.613531 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} Feb 14 05:05:35 crc kubenswrapper[4867]: E0214 05:05:35.218803 4867 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.113:53458->38.102.83.113:33373: write tcp 38.102.83.113:53458->38.102.83.113:33373: write: broken pipe Feb 14 05:06:31 crc kubenswrapper[4867]: I0214 05:06:31.250611 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:06:31 crc kubenswrapper[4867]: I0214 05:06:31.251226 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:07:01 crc kubenswrapper[4867]: I0214 05:07:01.250698 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:07:01 crc kubenswrapper[4867]: I0214 05:07:01.251222 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.250497 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.251551 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.251631 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.253100 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.253186 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e" gracePeriod=600 Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619530 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e" exitCode=0 Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619621 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"} Feb 14 05:07:31 crc kubenswrapper[4867]: I0214 05:07:31.619959 4867 scope.go:117] "RemoveContainer" containerID="af0906a53bc116fc9f684815c9db0ec3a71e62ba875fd0da6af484a9d2f2ec7d" Feb 14 05:07:32 crc kubenswrapper[4867]: I0214 05:07:32.632275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"} Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.820479 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:44 crc kubenswrapper[4867]: 
E0214 05:07:44.821666 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.821682 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.821987 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabbee2b-0869-439e-8c9c-f417ab44f850" containerName="keystone-cron" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.824099 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.841106 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981640 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981773 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:44 crc kubenswrapper[4867]: I0214 05:07:44.981867 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.085571 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086035 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086194 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.086659 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.087183 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.110132 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"redhat-operators-849hf\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") " pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.153750 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:45 crc kubenswrapper[4867]: I0214 05:07:45.928290 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"] Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.781726 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36" exitCode=0 Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.782694 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"} Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.782793 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"dafc13745014642f6bd9d9412ddef647b7ac82a22ff94a3893f227e1e4e1bb8d"} Feb 14 05:07:46 crc kubenswrapper[4867]: I0214 05:07:46.786386 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:07:47 crc kubenswrapper[4867]: I0214 05:07:47.797395 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} Feb 14 05:07:53 crc kubenswrapper[4867]: I0214 05:07:53.856835 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737" exitCode=0 Feb 14 05:07:53 crc kubenswrapper[4867]: I0214 05:07:53.856945 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} Feb 14 05:07:54 crc kubenswrapper[4867]: I0214 05:07:54.871324 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" 
event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerStarted","Data":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"} Feb 14 05:07:54 crc kubenswrapper[4867]: I0214 05:07:54.901702 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-849hf" podStartSLOduration=3.462764642 podStartE2EDuration="10.901679183s" podCreationTimestamp="2026-02-14 05:07:44 +0000 UTC" firstStartedPulling="2026-02-14 05:07:46.786161684 +0000 UTC m=+3498.867098998" lastFinishedPulling="2026-02-14 05:07:54.225076225 +0000 UTC m=+3506.306013539" observedRunningTime="2026-02-14 05:07:54.895140051 +0000 UTC m=+3506.976077375" watchObservedRunningTime="2026-02-14 05:07:54.901679183 +0000 UTC m=+3506.982616507" Feb 14 05:07:55 crc kubenswrapper[4867]: I0214 05:07:55.155606 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:55 crc kubenswrapper[4867]: I0214 05:07:55.156107 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-849hf" Feb 14 05:07:56 crc kubenswrapper[4867]: I0214 05:07:56.212475 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:07:56 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:07:56 crc kubenswrapper[4867]: > Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.231175 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.236146 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.250093 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.326964 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.329190 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.329525 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.433646 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434006 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434210 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434272 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.434584 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.479426 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"certified-operators-tq9n4\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") " pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:03 crc kubenswrapper[4867]: I0214 05:08:03.574260 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:04 crc kubenswrapper[4867]: I0214 05:08:04.258861 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"] Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279158 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf" exitCode=0 Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"} Feb 14 05:08:05 crc kubenswrapper[4867]: I0214 05:08:05.279540 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"092ed30559d49b05e26166d7bc8674d52cd854597a2573bcd4d145dc2358a4ed"} Feb 14 05:08:06 crc kubenswrapper[4867]: I0214 05:08:06.270409 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:06 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:06 crc kubenswrapper[4867]: > Feb 14 05:08:09 crc kubenswrapper[4867]: I0214 05:08:09.332557 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.615378 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.619036 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.636944 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695174 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695250 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.695445 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.797869 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.797978 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798089 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798457 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.798760 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.830122 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"community-operators-r5spn\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:11 crc kubenswrapper[4867]: I0214 05:08:11.946345 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.576088 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.611597 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.614062 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.626568 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.720967 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.721095 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.721214 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823044 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823138 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823300 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: 
\"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823621 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.823787 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.849024 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"redhat-marketplace-9fzdp\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:12 crc kubenswrapper[4867]: I0214 05:08:12.953704 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.404856 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.405192 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"c79540bf7b44b50c23a2fe282f0135035cd8741d7a1d186a283a56cc1e861b81"} Feb 14 05:08:13 crc kubenswrapper[4867]: I0214 05:08:13.569090 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.415205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"53a76ae710346cf06fcde776027b3014f0c654a9a0ed4d21f46f194e33a884c4"} Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.418488 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" exitCode=0 Feb 14 05:08:14 crc kubenswrapper[4867]: I0214 05:08:14.418631 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} Feb 14 05:08:15 crc kubenswrapper[4867]: I0214 05:08:15.429784 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" exitCode=0 Feb 14 05:08:15 crc kubenswrapper[4867]: I0214 05:08:15.429834 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.208429 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:16 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:16 crc kubenswrapper[4867]: > Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.454718 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0" exitCode=0 Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.454782 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.461869 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} Feb 14 05:08:16 crc kubenswrapper[4867]: I0214 05:08:16.466076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.497384 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" exitCode=0 Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.497420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.502623 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" exitCode=0 Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.502702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.507838 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerStarted","Data":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"} Feb 14 05:08:19 crc kubenswrapper[4867]: I0214 05:08:19.565415 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq9n4" podStartSLOduration=4.553566536 
podStartE2EDuration="17.56539113s" podCreationTimestamp="2026-02-14 05:08:02 +0000 UTC" firstStartedPulling="2026-02-14 05:08:05.283422604 +0000 UTC m=+3517.364359918" lastFinishedPulling="2026-02-14 05:08:18.295247198 +0000 UTC m=+3530.376184512" observedRunningTime="2026-02-14 05:08:19.553921809 +0000 UTC m=+3531.634859113" watchObservedRunningTime="2026-02-14 05:08:19.56539113 +0000 UTC m=+3531.646328444" Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.525785 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerStarted","Data":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.531983 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerStarted","Data":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.555039 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r5spn" podStartSLOduration=3.833202197 podStartE2EDuration="9.555016878s" podCreationTimestamp="2026-02-14 05:08:11 +0000 UTC" firstStartedPulling="2026-02-14 05:08:14.42092485 +0000 UTC m=+3526.501862164" lastFinishedPulling="2026-02-14 05:08:20.142739531 +0000 UTC m=+3532.223676845" observedRunningTime="2026-02-14 05:08:20.550345355 +0000 UTC m=+3532.631282669" watchObservedRunningTime="2026-02-14 05:08:20.555016878 +0000 UTC m=+3532.635954192" Feb 14 05:08:20 crc kubenswrapper[4867]: I0214 05:08:20.575878 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9fzdp" podStartSLOduration=4.064465459 podStartE2EDuration="8.575857045s" podCreationTimestamp="2026-02-14 05:08:12 +0000 UTC" firstStartedPulling="2026-02-14 05:08:15.436107808 +0000 UTC m=+3527.517045122" lastFinishedPulling="2026-02-14 05:08:19.947499394 +0000 UTC m=+3532.028436708" observedRunningTime="2026-02-14 05:08:20.571393508 +0000 UTC m=+3532.652330862" watchObservedRunningTime="2026-02-14 05:08:20.575857045 +0000 UTC m=+3532.656794359" Feb 14 05:08:21 crc kubenswrapper[4867]: I0214 05:08:21.947816 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:21 crc kubenswrapper[4867]: I0214 05:08:21.948545 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.954811 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.956066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:22 crc kubenswrapper[4867]: I0214 05:08:22.997093 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:22 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:22 crc kubenswrapper[4867]: > Feb 14 05:08:23 crc kubenswrapper[4867]: I0214 
05:08:23.576279 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:23 crc kubenswrapper[4867]: I0214 05:08:23.576344 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:24 crc kubenswrapper[4867]: I0214 05:08:24.004163 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9fzdp" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:24 crc kubenswrapper[4867]: > Feb 14 05:08:24 crc kubenswrapper[4867]: I0214 05:08:24.625851 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:24 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:24 crc kubenswrapper[4867]: > Feb 14 05:08:26 crc kubenswrapper[4867]: I0214 05:08:26.202784 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:26 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:26 crc kubenswrapper[4867]: > Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 05:08:33.007191 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:33 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:33 crc kubenswrapper[4867]: > Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 05:08:33.010014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:33 crc kubenswrapper[4867]: I0214 05:08:33.061110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.242364 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.620339 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:34 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:34 crc kubenswrapper[4867]: > Feb 14 05:08:34 crc kubenswrapper[4867]: I0214 05:08:34.698139 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9fzdp" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server" containerID="cri-o://9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" gracePeriod=2 Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.313612 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453764 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.453969 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") pod \"140ec2e6-ad78-48a9-b040-c957a66a3455\" (UID: \"140ec2e6-ad78-48a9-b040-c957a66a3455\") " Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.454172 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities" (OuterVolumeSpecName: "utilities") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.454645 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.461930 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr" (OuterVolumeSpecName: "kube-api-access-b5smr") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "kube-api-access-b5smr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.479628 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "140ec2e6-ad78-48a9-b040-c957a66a3455" (UID: "140ec2e6-ad78-48a9-b040-c957a66a3455"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.557869 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140ec2e6-ad78-48a9-b040-c957a66a3455-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.557926 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5smr\" (UniqueName: \"kubernetes.io/projected/140ec2e6-ad78-48a9-b040-c957a66a3455-kube-api-access-b5smr\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.709955 4867 generic.go:334] "Generic (PLEG): container finished" podID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" exitCode=0 Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.709999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710076 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9fzdp" event={"ID":"140ec2e6-ad78-48a9-b040-c957a66a3455","Type":"ContainerDied","Data":"53a76ae710346cf06fcde776027b3014f0c654a9a0ed4d21f46f194e33a884c4"} Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710104 4867 scope.go:117] "RemoveContainer" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.710031 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9fzdp" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.738617 4867 scope.go:117] "RemoveContainer" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.755938 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.770183 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9fzdp"] Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.778608 4867 scope.go:117] "RemoveContainer" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831395 4867 scope.go:117] "RemoveContainer" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.831843 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": container with ID starting with 9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590 not found: ID does not exist" containerID="9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831900 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590"} err="failed to get container status \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": rpc error: code = NotFound desc = could not find container \"9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590\": container with ID starting with 9f1f5d2a852335cd2627fafce6cf77ae91ff86308778c50e32db7793b9f53590 not found: ID does not exist" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.831936 4867 scope.go:117] "RemoveContainer" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.832254 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": container with ID starting with d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b not found: ID does not exist" containerID="d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832286 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b"} err="failed to get container status \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": rpc error: code = NotFound desc = could not find container \"d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b\": container with ID starting with d89b05b6d58f4cd442926058c08bcf9ea79ccff80e84314c7e648619c249a02b not found: ID does not exist" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832309 4867 scope.go:117] "RemoveContainer" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: E0214 05:08:35.832649 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": container with ID starting with 84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02 not found: ID does not exist" containerID="84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02" Feb 14 05:08:35 crc kubenswrapper[4867]: I0214 05:08:35.832681 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02"} err="failed to get container status \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": rpc error: code = NotFound desc = could not find container \"84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02\": container with ID starting with 84104b91d44c2273913c65b18ba62b49ea914033f2b77a0d38858fd584db5b02 not found: ID does not exist" Feb 14 05:08:36 crc kubenswrapper[4867]: I0214 05:08:36.224951 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=< Feb 14 05:08:36 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:08:36 crc kubenswrapper[4867]: > Feb 14 05:08:37 crc kubenswrapper[4867]: I0214 05:08:37.010812 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" path="/var/lib/kubelet/pods/140ec2e6-ad78-48a9-b040-c957a66a3455/volumes" Feb 14 05:08:42 crc kubenswrapper[4867]: I0214 05:08:42.008137 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:42 crc kubenswrapper[4867]: I0214 05:08:42.074180 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.011121 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.630183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.689001 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq9n4" Feb 14 05:08:43 crc kubenswrapper[4867]: I0214 05:08:43.802181 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r5spn" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server" containerID="cri-o://698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" gracePeriod=2 Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.357375 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.481954 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.482331 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.482490 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") pod \"1a9e54e7-1fab-4191-b99b-b976ff519072\" (UID: \"1a9e54e7-1fab-4191-b99b-b976ff519072\") " Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.483174 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities" (OuterVolumeSpecName: "utilities") pod "1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.497699 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj" (OuterVolumeSpecName: "kube-api-access-xkqrj") pod "1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "kube-api-access-xkqrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.538792 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a9e54e7-1fab-4191-b99b-b976ff519072" (UID: "1a9e54e7-1fab-4191-b99b-b976ff519072"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585273 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585535 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqrj\" (UniqueName: \"kubernetes.io/projected/1a9e54e7-1fab-4191-b99b-b976ff519072-kube-api-access-xkqrj\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.585623 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a9e54e7-1fab-4191-b99b-b976ff519072-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815400 4867 generic.go:334] "Generic (PLEG): container finished" podID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" exitCode=0 Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815490 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r5spn" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815487 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815941 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r5spn" event={"ID":"1a9e54e7-1fab-4191-b99b-b976ff519072","Type":"ContainerDied","Data":"c79540bf7b44b50c23a2fe282f0135035cd8741d7a1d186a283a56cc1e861b81"} Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.815960 4867 scope.go:117] "RemoveContainer" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.852557 4867 scope.go:117] "RemoveContainer" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.860425 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.871317 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r5spn"] Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.881034 4867 scope.go:117] "RemoveContainer" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947048 4867 scope.go:117] "RemoveContainer" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.947588 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": container with ID starting with 698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346 not found: ID does not exist" containerID="698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346" Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947720 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346"} err="failed to get container status \"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": rpc error: code = NotFound desc = could not find container \"698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346\": container with ID starting with 698f7f8abfbff13aaa725af14d8f5d2627ab98c98f10f308a3f4c2a0d4d80346 not found: ID does not exist"
Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.947813 4867 scope.go:117] "RemoveContainer" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"
Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.948372 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": container with ID starting with daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95 not found: ID does not exist" containerID="daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"
Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.948481 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95"} err="failed to get container status \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": rpc error: code = NotFound desc = could not find container \"daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95\": container with ID starting with daee3912bfff3bc6d2562589922c61f1a311cce4acc698856ba514100abe8d95 not found: ID does not exist"
Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.948577 4867 scope.go:117] "RemoveContainer" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"
Feb 14 05:08:44 crc kubenswrapper[4867]: E0214 05:08:44.949135 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": container with ID starting with 49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418 not found: ID does not exist" containerID="49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"
Feb 14 05:08:44 crc kubenswrapper[4867]: I0214 05:08:44.949167 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418"} err="failed to get container status \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": rpc error: code = NotFound desc = could not find container \"49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418\": container with ID starting with 49932758371f9d334694dab80b28a433bd0d81bb061d82cb8f2927692cb34418 not found: ID does not exist"
Feb 14 05:08:45 crc kubenswrapper[4867]: I0214 05:08:45.009349 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" path="/var/lib/kubelet/pods/1a9e54e7-1fab-4191-b99b-b976ff519072/volumes"
Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.204663 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:08:46 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:08:46 crc kubenswrapper[4867]: >
Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.803844 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"]
Feb 14 05:08:46 crc kubenswrapper[4867]: I0214 05:08:46.804448 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tq9n4" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server" containerID="cri-o://406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" gracePeriod=2
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.319448 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.485838 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") "
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486349 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") "
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") pod \"61a1135a-8f12-45c1-95f2-b7892a0533bf\" (UID: \"61a1135a-8f12-45c1-95f2-b7892a0533bf\") "
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.486588 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities" (OuterVolumeSpecName: "utilities") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.487522 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.491785 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j" (OuterVolumeSpecName: "kube-api-access-b2h7j") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "kube-api-access-b2h7j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.537952 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a1135a-8f12-45c1-95f2-b7892a0533bf" (UID: "61a1135a-8f12-45c1-95f2-b7892a0533bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.589776 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2h7j\" (UniqueName: \"kubernetes.io/projected/61a1135a-8f12-45c1-95f2-b7892a0533bf-kube-api-access-b2h7j\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.589816 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a1135a-8f12-45c1-95f2-b7892a0533bf-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854067 4867 generic.go:334] "Generic (PLEG): container finished" podID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104" exitCode=0
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854113 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"}
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854141 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq9n4" event={"ID":"61a1135a-8f12-45c1-95f2-b7892a0533bf","Type":"ContainerDied","Data":"092ed30559d49b05e26166d7bc8674d52cd854597a2573bcd4d145dc2358a4ed"}
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854158 4867 scope.go:117] "RemoveContainer" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.854186 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq9n4"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.895571 4867 scope.go:117] "RemoveContainer" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.901410 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"]
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.915450 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tq9n4"]
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.928203 4867 scope.go:117] "RemoveContainer" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980122 4867 scope.go:117] "RemoveContainer" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"
Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.980551 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": container with ID starting with 406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104 not found: ID does not exist" containerID="406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980593 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104"} err="failed to get container status \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": rpc error: code = NotFound desc = could not find container \"406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104\": container with ID starting with 406624fcb2004e26ee93fe465e74d4df5d2d90c3fb84b0fdbf7f4c7494f9a104 not found: ID does not exist"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980620 4867 scope.go:117] "RemoveContainer" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"
Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.980875 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": container with ID starting with 6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0 not found: ID does not exist" containerID="6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980900 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0"} err="failed to get container status \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": rpc error: code = NotFound desc = could not find container \"6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0\": container with ID starting with 6e8583e981e98adc699d970a530bd1a566e8915d3232bc5dd40842eb4e1c21a0 not found: ID does not exist"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.980913 4867 scope.go:117] "RemoveContainer" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"
Feb 14 05:08:47 crc kubenswrapper[4867]: E0214 05:08:47.981187 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": container with ID starting with e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf not found: ID does not exist" containerID="e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"
Feb 14 05:08:47 crc kubenswrapper[4867]: I0214 05:08:47.981218 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf"} err="failed to get container status \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": rpc error: code = NotFound desc = could not find container \"e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf\": container with ID starting with e05129467c11e060aff7a1a17b25c836377e0d3898482df922fa8384386b3fbf not found: ID does not exist"
Feb 14 05:08:49 crc kubenswrapper[4867]: I0214 05:08:49.019771 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" path="/var/lib/kubelet/pods/61a1135a-8f12-45c1-95f2-b7892a0533bf/volumes"
Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.218855 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-849hf"
Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.273355 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-849hf"
Feb 14 05:08:55 crc kubenswrapper[4867]: I0214 05:08:55.470321 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"]
Feb 14 05:08:56 crc kubenswrapper[4867]: I0214 05:08:56.954216 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-849hf" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server" containerID="cri-o://e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" gracePeriod=2
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.458084 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf"
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.633100 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") "
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634092 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities" (OuterVolumeSpecName: "utilities") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634217 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") "
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.634316 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") pod \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\" (UID: \"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913\") "
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.635678 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.641949 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx" (OuterVolumeSpecName: "kube-api-access-jkwvx") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "kube-api-access-jkwvx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.738651 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwvx\" (UniqueName: \"kubernetes.io/projected/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-kube-api-access-jkwvx\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.762280 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" (UID: "fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.840494 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.965986 4867 generic.go:334] "Generic (PLEG): container finished" podID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750" exitCode=0
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966045 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-849hf"
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966061 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"}
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966405 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-849hf" event={"ID":"fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913","Type":"ContainerDied","Data":"dafc13745014642f6bd9d9412ddef647b7ac82a22ff94a3893f227e1e4e1bb8d"}
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.966429 4867 scope.go:117] "RemoveContainer" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"
Feb 14 05:08:57 crc kubenswrapper[4867]: I0214 05:08:57.989415 4867 scope.go:117] "RemoveContainer" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.005502 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"]
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.016713 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-849hf"]
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.031257 4867 scope.go:117] "RemoveContainer" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.090946 4867 scope.go:117] "RemoveContainer" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"
Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.091429 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": container with ID starting with e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750 not found: ID does not exist" containerID="e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091461 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750"} err="failed to get container status \"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": rpc error: code = NotFound desc = could not find container \"e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750\": container with ID starting with e597dfd95c0e082f7c06168f6200f691b69f3d9758280a3b64ceb7062e323750 not found: ID does not exist"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091486 4867 scope.go:117] "RemoveContainer" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"
Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.091856 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": container with ID starting with 1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737 not found: ID does not exist" containerID="1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091888 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737"} err="failed to get container status \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": rpc error: code = NotFound desc = could not find container \"1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737\": container with ID starting with 1a1f33a026be29ec895d79cdceabf3f96f8e193872c0565e786f286f62513737 not found: ID does not exist"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.091907 4867 scope.go:117] "RemoveContainer" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"
Feb 14 05:08:58 crc kubenswrapper[4867]: E0214 05:08:58.092143 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": container with ID starting with 531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36 not found: ID does not exist" containerID="531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"
Feb 14 05:08:58 crc kubenswrapper[4867]: I0214 05:08:58.092166 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36"} err="failed to get container status \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": rpc error: code = NotFound desc = could not find container \"531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36\": container with ID starting with 531118b5698c29e4c554c835ddc5e56e0cb2165c80336c4df5e447f587f66a36 not found: ID does not exist"
Feb 14 05:08:59 crc kubenswrapper[4867]: I0214 05:08:59.009329 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" path="/var/lib/kubelet/pods/fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913/volumes"
Feb 14 05:09:31 crc kubenswrapper[4867]: I0214 05:09:31.250965 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:09:31 crc kubenswrapper[4867]: I0214 05:09:31.251658 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:10:01 crc kubenswrapper[4867]: I0214 05:10:01.251634 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:10:01 crc kubenswrapper[4867]: I0214 05:10:01.252645 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251148 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251726 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.251773 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.252771 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:10:31 crc kubenswrapper[4867]: I0214 05:10:31.252828 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" gracePeriod=600
Feb 14 05:10:31 crc kubenswrapper[4867]: E0214 05:10:31.373267 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.010439 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" exitCode=0
Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.010536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"}
Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.010864 4867 scope.go:117] "RemoveContainer" containerID="863d05e2c2e5d1963a43470517034f45e340fcf76621f87d3a0804ee07159c7e"
Feb 14 05:10:32 crc kubenswrapper[4867]: I0214 05:10:32.011724 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:10:32 crc kubenswrapper[4867]: E0214 05:10:32.012023 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:10:44 crc kubenswrapper[4867]: I0214 05:10:44.997175 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:10:44 crc kubenswrapper[4867]: E0214 05:10:44.998068 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:10:55 crc kubenswrapper[4867]: I0214 05:10:55.998282 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:10:56 crc kubenswrapper[4867]: E0214 05:10:55.999808 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:11:09 crc kubenswrapper[4867]: I0214 05:11:09.997887 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:11:09 crc kubenswrapper[4867]: E0214 05:11:09.998800 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:11:22 crc kubenswrapper[4867]: I0214 05:11:22.998560 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:11:23 crc kubenswrapper[4867]: E0214 05:11:22.999472 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:11:35 crc kubenswrapper[4867]: I0214 05:11:35.997635 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:11:36 crc kubenswrapper[4867]: E0214 05:11:36.000932 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:11:47 crc kubenswrapper[4867]: I0214 05:11:47.998448 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:11:48 crc kubenswrapper[4867]: E0214 05:11:47.999151 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:12:02 crc kubenswrapper[4867]: I0214 05:12:01.998319 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:12:02 crc kubenswrapper[4867]: E0214 05:12:02.003776 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:12:13 crc kubenswrapper[4867]: I0214 05:12:13.997544 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:12:13 crc kubenswrapper[4867]: E0214 05:12:13.999571 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:12:29 crc kubenswrapper[4867]: I0214 05:12:29.007454 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:12:29 crc kubenswrapper[4867]: E0214 05:12:29.016414 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:12:42 crc kubenswrapper[4867]: I0214 05:12:42.998328 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:12:43 crc kubenswrapper[4867]: E0214 05:12:42.999119 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:12:55 crc kubenswrapper[4867]: I0214 05:12:55.998654 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:12:56 crc kubenswrapper[4867]: E0214 05:12:55.999379 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:13:09 crc kubenswrapper[4867]: I0214 05:13:09.997570 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:13:09 crc kubenswrapper[4867]: E0214 05:13:09.999786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:13:23 crc kubenswrapper[4867]: I0214 05:13:23.997878 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:13:23 crc kubenswrapper[4867]: E0214 05:13:23.998696 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:13:35 crc kubenswrapper[4867]: I0214 05:13:35.997546 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:13:35 crc kubenswrapper[4867]: E0214 05:13:35.998347 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:13:46 crc kubenswrapper[4867]: I0214 05:13:46.997576 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:13:46 crc kubenswrapper[4867]: E0214 05:13:46.998245 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:14:01 crc kubenswrapper[4867]: I0214 05:14:01.002534 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:14:01 crc kubenswrapper[4867]: E0214 05:14:01.003225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:14:13 crc kubenswrapper[4867]: I0214 05:14:13.998340 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:14:14 crc kubenswrapper[4867]: E0214 05:14:13.999397 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:14:26 crc kubenswrapper[4867]: I0214 05:14:26.997709 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:14:26 crc kubenswrapper[4867]: E0214 05:14:26.998360 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:14:40 crc kubenswrapper[4867]: I0214 05:14:40.997888 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:14:40 crc kubenswrapper[4867]: E0214 05:14:40.998792 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:14:53 crc kubenswrapper[4867]: I0214 05:14:53.004081 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:14:53 crc kubenswrapper[4867]: E0214 05:14:53.004994 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.188293 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"]
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189851 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189868 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189881 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189887 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189895 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189901 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189933 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189939 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189960 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189967 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.189979 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.189985 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190008 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190015 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190031 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190038 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190050 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190055 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190068 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190074 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="extract-content"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190092 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190099 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="extract-utilities"
Feb 14 05:15:00 crc kubenswrapper[4867]: E0214 05:15:00.190117 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190126 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190368 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd6a5a7a-1d38-4bb2-9691-0cd6f85c9913" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190382 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a1135a-8f12-45c1-95f2-b7892a0533bf" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190416 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="140ec2e6-ad78-48a9-b040-c957a66a3455" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.190434 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9e54e7-1fab-4191-b99b-b976ff519072" containerName="registry-server"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.191801 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.195199 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.195288 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.205367 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"]
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.295943 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.296138 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.296160 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402672 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.402867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:00 crc kubenswrapper[4867]: I0214 05:15:00.403806 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.061021 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.061460 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"collect-profiles-29517435-sp924\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.124146 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.641985 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"]
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.913663 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerStarted","Data":"57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6"}
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.913990 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerStarted","Data":"f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"}
Feb 14 05:15:01 crc kubenswrapper[4867]: I0214 05:15:01.932843 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" podStartSLOduration=1.9328306290000001 podStartE2EDuration="1.932830629s" podCreationTimestamp="2026-02-14 05:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:15:01.932669265 +0000 UTC m=+3934.013606579" watchObservedRunningTime="2026-02-14 05:15:01.932830629 +0000 UTC m=+3934.013767944"
Feb 14 05:15:02 crc kubenswrapper[4867]: I0214 05:15:02.929156 4867 generic.go:334] "Generic (PLEG): container finished" podID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerID="57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6" exitCode=0
Feb 14 05:15:02 crc kubenswrapper[4867]: I0214 05:15:02.929238 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerDied","Data":"57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6"}
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.402258 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519001 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519123 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.519618 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") pod \"4d32d646-2d3a-40db-acb7-a2c9e410c655\" (UID: \"4d32d646-2d3a-40db-acb7-a2c9e410c655\") "
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.520224 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.526811 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.527994 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6" (OuterVolumeSpecName: "kube-api-access-mt8z6") pod "4d32d646-2d3a-40db-acb7-a2c9e410c655" (UID: "4d32d646-2d3a-40db-acb7-a2c9e410c655"). InnerVolumeSpecName "kube-api-access-mt8z6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622210 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d32d646-2d3a-40db-acb7-a2c9e410c655-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622244 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt8z6\" (UniqueName: \"kubernetes.io/projected/4d32d646-2d3a-40db-acb7-a2c9e410c655-kube-api-access-mt8z6\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.622255 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d32d646-2d3a-40db-acb7-a2c9e410c655-config-volume\") on node \"crc\" DevicePath \"\""
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.718392 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"]
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.729083 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517390-kwnnx"]
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.948634 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924" event={"ID":"4d32d646-2d3a-40db-acb7-a2c9e410c655","Type":"ContainerDied","Data":"f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"}
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.949156 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f359105f56e5cfa82013265a8942223d6c5a788a74259ac8eae7176b4ebbf7e3"
Feb 14 05:15:04 crc kubenswrapper[4867]: I0214 05:15:04.949340 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"
Feb 14 05:15:05 crc kubenswrapper[4867]: I0214 05:15:05.011205 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c88887-cc0d-4b61-9ccc-e5583c27322f" path="/var/lib/kubelet/pods/f7c88887-cc0d-4b61-9ccc-e5583c27322f/volumes"
Feb 14 05:15:07 crc kubenswrapper[4867]: I0214 05:15:07.997688 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:07 crc kubenswrapper[4867]: E0214 05:15:07.998461 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:15:16 crc kubenswrapper[4867]: I0214 05:15:16.399279 4867 scope.go:117] "RemoveContainer" containerID="1ad9cf29f8ad6082a18e81d3f3baec01fbc4267f231e524551a2925f597e672d"
Feb 14 05:15:21 crc kubenswrapper[4867]: I0214 05:15:21.997726 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:21 crc kubenswrapper[4867]: E0214 05:15:21.998470 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:15:35 crc kubenswrapper[4867]: I0214 05:15:35.998337 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025"
Feb 14 05:15:36 crc kubenswrapper[4867]: I0214 05:15:36.289468 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"}
Feb 14 05:18:01 crc kubenswrapper[4867]: I0214 05:18:01.251016 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:18:01 crc kubenswrapper[4867]: I0214 05:18:01.251630 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:18:31 crc kubenswrapper[4867]: I0214 05:18:31.251044 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:18:31 crc kubenswrapper[4867]: I0214 05:18:31.252624 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.364486 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:41 crc kubenswrapper[4867]: E0214 05:18:41.365610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.365627 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.365902 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" containerName="collect-profiles"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.367835 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.397279 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.439340 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.439568 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.442458 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.545490 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546111 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546130 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546407 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.546681 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.568422 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"certified-operators-sfn52\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:41 crc kubenswrapper[4867]: I0214 05:18:41.697927 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.313534 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:42 crc kubenswrapper[4867]: W0214 05:18:42.323710 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffaef1cf_3868_4c30_a1db_f2f0e2305795.slice/crio-561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e WatchSource:0}: Error finding container 561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e: Status 404 returned error can't find the container with id 561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835835 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41" exitCode=0
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835885 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"}
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.835933 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e"}
Feb 14 05:18:42 crc kubenswrapper[4867]: I0214 05:18:42.837925 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 14 05:18:43 crc kubenswrapper[4867]: I0214 05:18:43.849910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"}
Feb 14 05:18:45 crc kubenswrapper[4867]: I0214 05:18:45.886802 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7" exitCode=0
Feb 14 05:18:45 crc kubenswrapper[4867]: I0214 05:18:45.886875 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"}
Feb 14 05:18:46 crc kubenswrapper[4867]: I0214 05:18:46.900375 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerStarted","Data":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"}
Feb 14 05:18:46 crc kubenswrapper[4867]: I0214 05:18:46.936234 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sfn52" podStartSLOduration=2.506869348 podStartE2EDuration="5.936208952s" podCreationTimestamp="2026-02-14 05:18:41 +0000 UTC" firstStartedPulling="2026-02-14 05:18:42.837567138 +0000 UTC m=+4154.918504472" lastFinishedPulling="2026-02-14 05:18:46.266906772 +0000 UTC m=+4158.347844076" observedRunningTime="2026-02-14 05:18:46.925929882 +0000 UTC m=+4159.006867206" watchObservedRunningTime="2026-02-14 05:18:46.936208952 +0000 UTC m=+4159.017146286"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.698516 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.700062 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:51 crc kubenswrapper[4867]: I0214 05:18:51.746528 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:52 crc kubenswrapper[4867]: I0214 05:18:52.826078 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sfn52"
Feb 14 05:18:52 crc kubenswrapper[4867]: I0214 05:18:52.879494 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"]
Feb 14 05:18:53 crc kubenswrapper[4867]: I0214 05:18:53.966705 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sfn52" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server" containerID="cri-o://7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" gracePeriod=2
Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.471426 4867 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.482481 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.482540 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.483673 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities" (OuterVolumeSpecName: "utilities") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.488291 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr" (OuterVolumeSpecName: "kube-api-access-65jzr") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "kube-api-access-65jzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.585126 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") pod \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\" (UID: \"ffaef1cf-3868-4c30-a1db-f2f0e2305795\") " Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.590437 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65jzr\" (UniqueName: \"kubernetes.io/projected/ffaef1cf-3868-4c30-a1db-f2f0e2305795-kube-api-access-65jzr\") on node \"crc\" DevicePath \"\"" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.590478 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.640015 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffaef1cf-3868-4c30-a1db-f2f0e2305795" (UID: "ffaef1cf-3868-4c30-a1db-f2f0e2305795"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.693397 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaef1cf-3868-4c30-a1db-f2f0e2305795-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978223 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" exitCode=0 Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978263 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"} Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sfn52" event={"ID":"ffaef1cf-3868-4c30-a1db-f2f0e2305795","Type":"ContainerDied","Data":"561dea0fcb8fcf098edac916c2d02df5df7a3f9411632f29686b83622385f99e"} Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.978317 4867 scope.go:117] "RemoveContainer" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" Feb 14 05:18:54 crc kubenswrapper[4867]: I0214 05:18:54.980202 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sfn52" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.017241 4867 scope.go:117] "RemoveContainer" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.032803 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"] Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.047785 4867 scope.go:117] "RemoveContainer" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.049009 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sfn52"] Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101387 4867 scope.go:117] "RemoveContainer" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.101875 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": container with ID starting with 7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356 not found: ID does not exist" containerID="7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101917 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356"} err="failed to get container status \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": rpc error: code = NotFound desc = could not find container \"7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356\": container with ID starting with 7405032c933c85d1feef84e2dd428f0653b5798ccfb12c68c957dc6492227356 not found: ID does not exist" Feb 14 
05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.101954 4867 scope.go:117] "RemoveContainer" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7" Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.102338 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": container with ID starting with ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7 not found: ID does not exist" containerID="ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102369 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7"} err="failed to get container status \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": rpc error: code = NotFound desc = could not find container \"ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7\": container with ID starting with ab65aa7384e5ee7a4d08c632a4c36a58f1df873877f1acaf2626c9ba9431eee7 not found: ID does not exist" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102388 4867 scope.go:117] "RemoveContainer" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41" Feb 14 05:18:55 crc kubenswrapper[4867]: E0214 05:18:55.102682 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": container with ID starting with 3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41 not found: ID does not exist" containerID="3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41" Feb 14 05:18:55 crc kubenswrapper[4867]: I0214 05:18:55.102708 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41"} err="failed to get container status \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": rpc error: code = NotFound desc = could not find container \"3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41\": container with ID starting with 3d381218bc5be9c471f4279313529f67a85bd7d11d6f891202c7c8e3b688be41 not found: ID does not exist" Feb 14 05:18:57 crc kubenswrapper[4867]: I0214 05:18:57.009455 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" path="/var/lib/kubelet/pods/ffaef1cf-3868-4c30-a1db-f2f0e2305795/volumes" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.561779 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.562980 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-utilities" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.562999 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-utilities" Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.563063 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-content" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 
05:18:58.563071 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="extract-content" Feb 14 05:18:58 crc kubenswrapper[4867]: E0214 05:18:58.563095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.563104 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.563413 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaef1cf-3868-4c30-a1db-f2f0e2305795" containerName="registry-server" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.565834 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.575273 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.599894 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.599993 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.600419 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702157 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702247 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 
14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702890 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.702944 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.733933 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"community-operators-ccpdl\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:58 crc kubenswrapper[4867]: I0214 05:18:58.889129 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:18:59 crc kubenswrapper[4867]: I0214 05:18:59.550333 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.056678 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96" exitCode=0 Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.056731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"} Feb 14 05:19:00 crc kubenswrapper[4867]: I0214 05:19:00.057815 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"b915280cad8dbdbca2545e56ab124792f335b9bf6c5c77908ebfda56bab51bc4"} Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.250994 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.251728 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.251786 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.252699 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:19:01 crc kubenswrapper[4867]: I0214 05:19:01.252767 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713" gracePeriod=600 Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.084019 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087494 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713" exitCode=0 Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087560 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087590 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"} Feb 14 05:19:02 crc kubenswrapper[4867]: I0214 05:19:02.087614 4867 scope.go:117] "RemoveContainer" containerID="734a61f9c7ed9ca50b3d56703c2d5beedaf665574b56c30e78eaf04e359de025" Feb 14 05:19:04 crc kubenswrapper[4867]: I0214 05:19:04.132637 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" exitCode=0 Feb 14 05:19:04 crc kubenswrapper[4867]: I0214 05:19:04.134248 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} Feb 14 05:19:05 crc kubenswrapper[4867]: I0214 05:19:05.145890 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerStarted","Data":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} Feb 14 05:19:05 crc kubenswrapper[4867]: I0214 05:19:05.193715 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ccpdl" podStartSLOduration=2.747283703 podStartE2EDuration="7.19369229s" podCreationTimestamp="2026-02-14 05:18:58 +0000 UTC" firstStartedPulling="2026-02-14 05:19:00.058450881 +0000 UTC m=+4172.139388195" lastFinishedPulling="2026-02-14 05:19:04.504859468 +0000 UTC m=+4176.585796782" observedRunningTime="2026-02-14 05:19:05.180943356 +0000 UTC m=+4177.261880670" 
watchObservedRunningTime="2026-02-14 05:19:05.19369229 +0000 UTC m=+4177.274629624" Feb 14 05:19:08 crc kubenswrapper[4867]: I0214 05:19:08.889582 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:08 crc kubenswrapper[4867]: I0214 05:19:08.890109 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:09 crc kubenswrapper[4867]: I0214 05:19:09.942998 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:09 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:09 crc kubenswrapper[4867]: > Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.650881 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.654898 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.665617 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712303 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712562 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.712623 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.814867 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.814931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815018 4867 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815444 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.815450 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.833610 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"redhat-operators-49vhq\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:13 crc kubenswrapper[4867]: I0214 05:19:13.991145 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:14 crc kubenswrapper[4867]: I0214 05:19:14.506279 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.266831 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab" exitCode=0 Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.266910 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"} Feb 14 05:19:15 crc kubenswrapper[4867]: I0214 05:19:15.267702 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"e85219fe308c4b890aa56acecb51d412bff27bd86fbcf1d4ca701931e660ccea"} Feb 14 05:19:17 crc kubenswrapper[4867]: I0214 05:19:17.296046 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} Feb 14 05:19:19 crc kubenswrapper[4867]: I0214 05:19:19.942803 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:19 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:19 crc kubenswrapper[4867]: > Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.868416 4867 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.872475 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.882501 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.953845 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.954496 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:22 crc kubenswrapper[4867]: I0214 05:19:22.954720 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057072 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057266 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.057331 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.058353 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.058498 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") 
" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.175868 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"redhat-marketplace-2csc4\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.238222 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:23 crc kubenswrapper[4867]: I0214 05:19:23.884315 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:23 crc kubenswrapper[4867]: W0214 05:19:23.894332 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16024882_d3c8_413a_9619_789d77e9f477.slice/crio-0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91 WatchSource:0}: Error finding container 0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91: Status 404 returned error can't find the container with id 0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91 Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.367970 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697" exitCode=0 Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.368026 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"} Feb 14 05:19:24 crc kubenswrapper[4867]: I0214 05:19:24.368058 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91"} Feb 14 05:19:26 crc kubenswrapper[4867]: I0214 05:19:26.392720 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.414409 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21" exitCode=0 Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.414484 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} Feb 14 05:19:28 crc kubenswrapper[4867]: I0214 05:19:28.971753 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.059529 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:29 
crc kubenswrapper[4867]: I0214 05:19:29.429242 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21" exitCode=0 Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.429316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.434236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerStarted","Data":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"} Feb 14 05:19:29 crc kubenswrapper[4867]: I0214 05:19:29.497338 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2csc4" podStartSLOduration=3.077698623 podStartE2EDuration="7.497297826s" podCreationTimestamp="2026-02-14 05:19:22 +0000 UTC" firstStartedPulling="2026-02-14 05:19:24.370666613 +0000 UTC m=+4196.451603927" lastFinishedPulling="2026-02-14 05:19:28.790265816 +0000 UTC m=+4200.871203130" observedRunningTime="2026-02-14 05:19:29.484237553 +0000 UTC m=+4201.565174887" watchObservedRunningTime="2026-02-14 05:19:29.497297826 +0000 UTC m=+4201.578235140" Feb 14 05:19:30 crc kubenswrapper[4867]: I0214 05:19:30.856583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:30 crc kubenswrapper[4867]: I0214 05:19:30.857868 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ccpdl" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" containerID="cri-o://4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" gracePeriod=2 Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.477632 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.578878 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.578992 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.579177 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") pod \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\" (UID: \"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728\") " Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.579701 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities" (OuterVolumeSpecName: "utilities") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.580254 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.584747 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t" (OuterVolumeSpecName: "kube-api-access-d9l2t") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "kube-api-access-d9l2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.631917 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" (UID: "4cd89cb2-e3ca-4d2c-8ac0-55877cda3728"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.681619 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9l2t\" (UniqueName: \"kubernetes.io/projected/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-kube-api-access-d9l2t\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.681650 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.837389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerStarted","Data":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840119 4867 generic.go:334] "Generic (PLEG): container finished" podID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" exitCode=0 Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840160 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ccpdl" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840165 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840202 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ccpdl" event={"ID":"4cd89cb2-e3ca-4d2c-8ac0-55877cda3728","Type":"ContainerDied","Data":"b915280cad8dbdbca2545e56ab124792f335b9bf6c5c77908ebfda56bab51bc4"} Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.840223 4867 scope.go:117] "RemoveContainer" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.873118 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49vhq" podStartSLOduration=4.112191528 podStartE2EDuration="18.873099629s" podCreationTimestamp="2026-02-14 05:19:13 +0000 UTC" firstStartedPulling="2026-02-14 05:19:15.269028069 +0000 UTC m=+4187.349965383" lastFinishedPulling="2026-02-14 05:19:30.02993617 +0000 UTC m=+4202.110873484" observedRunningTime="2026-02-14 05:19:31.863022814 +0000 UTC m=+4203.943960148" watchObservedRunningTime="2026-02-14 05:19:31.873099629 +0000 UTC m=+4203.954036943" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.881755 4867 scope.go:117] "RemoveContainer" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.901497 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.914357 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ccpdl"] Feb 14 05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.959213 4867 scope.go:117] "RemoveContainer" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96" Feb 14 
05:19:31 crc kubenswrapper[4867]: I0214 05:19:31.998694 4867 scope.go:117] "RemoveContainer" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:31.999857 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": container with ID starting with 4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381 not found: ID does not exist" containerID="4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381" Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:31.999899 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381"} err="failed to get container status \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": rpc error: code = NotFound desc = could not find container \"4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381\": container with ID starting with 4386cbb7e5d69769887f0ad9bfdb3e124aab417ec53b5bee032ddbbed7cdb381 not found: ID does not exist" Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:31.999924 4867 scope.go:117] "RemoveContainer" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:32.002832 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": container with ID starting with 41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c not found: ID does not exist" containerID="41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c" Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.002861 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c"} err="failed to get container status \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": rpc error: code = NotFound desc = could not find container \"41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c\": container with ID starting with 41c0e6e6883de66e197f82001e61b38a735dc8ff5d59d0c42b5e561095eb0e8c not found: ID does not exist" Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.002876 4867 scope.go:117] "RemoveContainer" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96" Feb 14 05:19:32 crc kubenswrapper[4867]: E0214 05:19:32.003740 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": container with ID starting with 09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96 not found: ID does not exist" containerID="09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96" Feb 14 05:19:32 crc kubenswrapper[4867]: I0214 05:19:32.003766 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96"} err="failed to get container status \"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": rpc error: code = NotFound desc = could not find container 
\"09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96\": container with ID starting with 09cfccff68a249443672b44dc5ba251bcdd3149c3967997820634457c340ac96 not found: ID does not exist" Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.009401 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" path="/var/lib/kubelet/pods/4cd89cb2-e3ca-4d2c-8ac0-55877cda3728/volumes" Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.239883 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.239929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.991971 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:33 crc kubenswrapper[4867]: I0214 05:19:33.992238 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:19:35 crc kubenswrapper[4867]: I0214 05:19:35.002903 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2csc4" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:35 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:35 crc kubenswrapper[4867]: > Feb 14 05:19:35 crc kubenswrapper[4867]: I0214 05:19:35.048895 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:35 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:35 crc kubenswrapper[4867]: > Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.293706 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.356049 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:43 crc kubenswrapper[4867]: I0214 05:19:43.539571 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.015165 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2csc4" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" containerID="cri-o://80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" gracePeriod=2 Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.043579 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:45 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:45 crc kubenswrapper[4867]: > Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.552684 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.673360 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.673886 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.674057 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") pod \"16024882-d3c8-413a-9619-789d77e9f477\" (UID: \"16024882-d3c8-413a-9619-789d77e9f477\") " Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.674651 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities" (OuterVolumeSpecName: "utilities") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.679392 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck" (OuterVolumeSpecName: "kube-api-access-s5rck") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "kube-api-access-s5rck". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.701636 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16024882-d3c8-413a-9619-789d77e9f477" (UID: "16024882-d3c8-413a-9619-789d77e9f477"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777477 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5rck\" (UniqueName: \"kubernetes.io/projected/16024882-d3c8-413a-9619-789d77e9f477-kube-api-access-s5rck\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777523 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:45 crc kubenswrapper[4867]: I0214 05:19:45.777535 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16024882-d3c8-413a-9619-789d77e9f477-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028274 4867 generic.go:334] "Generic (PLEG): container finished" podID="16024882-d3c8-413a-9619-789d77e9f477" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" exitCode=0 Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028609 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"} Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2csc4" event={"ID":"16024882-d3c8-413a-9619-789d77e9f477","Type":"ContainerDied","Data":"0bd87b1b60b2449316ba1f5fd108dff5a3e0deb8570fb277636fa1d8a6c12a91"} Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028662 4867 scope.go:117] "RemoveContainer" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.028831 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2csc4" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.070255 4867 scope.go:117] "RemoveContainer" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.107583 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.120198 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2csc4"] Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.207930 4867 scope.go:117] "RemoveContainer" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.240779 4867 scope.go:117] "RemoveContainer" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.241676 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": container with ID starting with 80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f not found: ID does not exist" containerID="80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.242202 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f"} err="failed to get container status \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": rpc error: code = NotFound desc = could not find container \"80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f\": container with ID starting with 80da5021132b233f94e0d7aa02f156221b27d610aad0017992e3be79a849895f not found: ID does not exist" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.242417 4867 scope.go:117] "RemoveContainer" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21" Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.243337 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": container with ID starting with 53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21 not found: ID does not exist" containerID="53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243381 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21"} err="failed to get container status \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": rpc error: code = NotFound desc = could not find container \"53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21\": container with ID starting with 53de0796beeedb338ef0361a1500f7f5f5ce4be4c9101baa657898e01e6ceb21 not found: ID does not exist" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243396 4867 scope.go:117] "RemoveContainer" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697" Feb 14 05:19:46 crc kubenswrapper[4867]: E0214 05:19:46.243815 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": container with ID starting with a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697 not found: ID does not exist" containerID="a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697" Feb 14 05:19:46 crc kubenswrapper[4867]: I0214 05:19:46.243883 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697"} err="failed to get container status \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": rpc error: code = NotFound desc = could not find container \"a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697\": container with ID starting with a3c5aceb21a055ff97246bf194a65b6289deeec95ff415a7c557e033fc1ec697 not found: ID does not exist" Feb 14 05:19:47 crc kubenswrapper[4867]: I0214 05:19:47.012177 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16024882-d3c8-413a-9619-789d77e9f477" path="/var/lib/kubelet/pods/16024882-d3c8-413a-9619-789d77e9f477/volumes" Feb 14 05:19:55 crc kubenswrapper[4867]: I0214 05:19:55.038374 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=< Feb 14 05:19:55 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:19:55 crc kubenswrapper[4867]: > Feb 14 05:20:05 crc kubenswrapper[4867]: I0214 05:20:05.070851 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=< Feb 14 05:20:05 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:20:05 crc kubenswrapper[4867]: > Feb 14 05:20:15 crc kubenswrapper[4867]: I0214 05:20:15.044271 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" probeResult="failure" output=< Feb 14 05:20:15 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:20:15 crc kubenswrapper[4867]: > Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.038669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.094019 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:20:24 crc kubenswrapper[4867]: I0214 05:20:24.276001 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:20:25 crc kubenswrapper[4867]: I0214 05:20:25.445290 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-49vhq" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" containerID="cri-o://8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" gracePeriod=2 Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.141684 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283036 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283201 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.283345 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") pod \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\" (UID: \"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c\") " Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.289858 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj" (OuterVolumeSpecName: "kube-api-access-g5qjj") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "kube-api-access-g5qjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.304850 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities" (OuterVolumeSpecName: "utilities") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.386213 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5qjj\" (UniqueName: \"kubernetes.io/projected/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-kube-api-access-g5qjj\") on node \"crc\" DevicePath \"\"" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.386250 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.437346 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" (UID: "5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458204 4867 generic.go:334] "Generic (PLEG): container finished" podID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" exitCode=0 Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"} Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458271 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49vhq" event={"ID":"5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c","Type":"ContainerDied","Data":"e85219fe308c4b890aa56acecb51d412bff27bd86fbcf1d4ca701931e660ccea"} Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458288 4867 scope.go:117] "RemoveContainer" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.458417 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49vhq" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.485256 4867 scope.go:117] "RemoveContainer" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.489678 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.493090 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.504748 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-49vhq"] Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.508125 4867 scope.go:117] "RemoveContainer" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.600861 4867 scope.go:117] "RemoveContainer" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.606183 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": container with ID starting with 8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2 not found: ID does not exist" containerID="8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.606246 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2"} err="failed to get container status \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": rpc error: code = NotFound desc = could not find container \"8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2\": container with ID starting with 8637c55cd33254d8b0ce51e872c705e2de303e15fe068a58f6f17f2158a18ae2 not found: ID does not exist" Feb 14 05:20:26 crc 
kubenswrapper[4867]: I0214 05:20:26.606289 4867 scope.go:117] "RemoveContainer" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21" Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.608191 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": container with ID starting with 32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21 not found: ID does not exist" containerID="32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.608249 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21"} err="failed to get container status \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": rpc error: code = NotFound desc = could not find container \"32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21\": container with ID starting with 32e9234c8f61ccd977c1f7d1a44ad3e20f3d183b405ce1c31f2ab20e2ed59d21 not found: ID does not exist" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.608265 4867 scope.go:117] "RemoveContainer" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab" Feb 14 05:20:26 crc kubenswrapper[4867]: E0214 05:20:26.616700 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": container with ID starting with 9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab not found: ID does not exist" containerID="9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab" Feb 14 05:20:26 crc kubenswrapper[4867]: I0214 05:20:26.616757 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab"} err="failed to get container status \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": rpc error: code = NotFound desc = could not find container \"9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab\": container with ID starting with 9122fb4db282317ba1bf48a8c758a31672f3b60aaf65417427c85eedf72c5eab not found: ID does not exist" Feb 14 05:20:27 crc kubenswrapper[4867]: I0214 05:20:27.009031 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" path="/var/lib/kubelet/pods/5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c/volumes" Feb 14 05:21:01 crc kubenswrapper[4867]: I0214 05:21:01.250880 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:21:01 crc kubenswrapper[4867]: I0214 05:21:01.251368 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:21:08 crc kubenswrapper[4867]: I0214 05:21:08.761067 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-nmstate/nmstate-handler-k6p82" podUID="ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa" containerName="nmstate-handler" probeResult="failure" output="command timed out" Feb 14 05:21:31 crc kubenswrapper[4867]: I0214 05:21:31.250381 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:21:31 crc kubenswrapper[4867]: I0214 05:21:31.250870 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251110 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251626 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.251669 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.252468 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.252526 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" gracePeriod=600 Feb 14 05:22:01 crc kubenswrapper[4867]: E0214 05:22:01.372906 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537574 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" exitCode=0 Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537649 4867 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"} Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.537904 4867 scope.go:117] "RemoveContainer" containerID="1e3602f7b703c67cfacb5cb1380c16876968a54c75c8bfed3061dc4a8fbe9713" Feb 14 05:22:01 crc kubenswrapper[4867]: I0214 05:22:01.538752 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:22:01 crc kubenswrapper[4867]: E0214 05:22:01.539042 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:22:16 crc kubenswrapper[4867]: I0214 05:22:16.997480 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:22:16 crc kubenswrapper[4867]: E0214 05:22:16.998575 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:22:27 crc kubenswrapper[4867]: I0214 05:22:27.997815 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:22:27 crc kubenswrapper[4867]: E0214 05:22:27.998623 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:22:39 crc kubenswrapper[4867]: I0214 05:22:39.997842 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:22:39 crc kubenswrapper[4867]: E0214 05:22:39.998621 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:22:51 crc kubenswrapper[4867]: I0214 05:22:51.997736 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:22:51 crc kubenswrapper[4867]: E0214 05:22:51.998710 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:23:06 crc kubenswrapper[4867]: I0214 05:23:06.997833 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:23:06 crc kubenswrapper[4867]: E0214 05:23:06.998838 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:23:17 crc kubenswrapper[4867]: I0214 05:23:17.998179 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:23:18 crc kubenswrapper[4867]: E0214 05:23:17.999256 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:23:29 crc kubenswrapper[4867]: I0214 05:23:29.009660 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:23:29 crc kubenswrapper[4867]: E0214 05:23:29.010559 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:23:41 crc kubenswrapper[4867]: I0214 05:23:41.997081 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:23:41 crc kubenswrapper[4867]: E0214 05:23:41.997811 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:23:53 crc kubenswrapper[4867]: I0214 05:23:53.997805 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:23:53 crc kubenswrapper[4867]: E0214 05:23:53.998777 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:05 crc kubenswrapper[4867]: I0214 05:24:05.998024 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:05 crc kubenswrapper[4867]: E0214 05:24:05.999063 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:21 crc kubenswrapper[4867]: I0214 05:24:21.002233 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:21 crc kubenswrapper[4867]: E0214 05:24:21.004356 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:34 crc kubenswrapper[4867]: I0214 05:24:34.997545 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:34 crc kubenswrapper[4867]: E0214 05:24:34.998333 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:45 crc kubenswrapper[4867]: I0214 05:24:45.998569 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:46 crc kubenswrapper[4867]: E0214 05:24:45.999767 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:24:57 crc kubenswrapper[4867]: I0214 05:24:57.998099 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:24:58 crc kubenswrapper[4867]: E0214 05:24:57.999069 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:09 crc kubenswrapper[4867]: I0214 05:25:09.004826 4867 
scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:09 crc kubenswrapper[4867]: E0214 05:25:09.005701 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:21 crc kubenswrapper[4867]: I0214 05:25:21.997552 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:21 crc kubenswrapper[4867]: E0214 05:25:21.998429 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:35 crc kubenswrapper[4867]: I0214 05:25:35.997996 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:35 crc kubenswrapper[4867]: E0214 05:25:35.998910 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:25:49 crc kubenswrapper[4867]: I0214 05:25:49.997383 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:25:49 crc kubenswrapper[4867]: E0214 05:25:49.998177 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:03 crc kubenswrapper[4867]: I0214 05:26:03.997804 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:03 crc kubenswrapper[4867]: E0214 05:26:03.998583 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:16 crc kubenswrapper[4867]: I0214 05:26:16.997179 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:16 crc kubenswrapper[4867]: E0214 05:26:16.997943 4867 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:30 crc kubenswrapper[4867]: I0214 05:26:30.999203 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:31 crc kubenswrapper[4867]: E0214 05:26:31.000021 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.249426 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250341 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250355 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250374 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250381 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250392 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250400 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250409 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250415 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250426 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250432 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250445 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250451 4867 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250461 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250466 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250481 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250487 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="extract-content" Feb 14 05:26:37 crc kubenswrapper[4867]: E0214 05:26:37.250528 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250534 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="extract-utilities" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250739 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cd89cb2-e3ca-4d2c-8ac0-55877cda3728" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250755 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="5313c78d-a1fc-4a5d-b4a9-67c1c7c1675c" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.250774 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="16024882-d3c8-413a-9619-789d77e9f477" containerName="registry-server" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.251549 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.254610 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.254815 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.256029 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wxg74" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.263489 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.271916 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357332 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357403 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357434 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357473 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357548 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357613 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357835 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.357891 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.358030 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.460948 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461017 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461090 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461346 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461414 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461447 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461525 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod 
\"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461589 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.461669 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.467332 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.467436 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469320 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469631 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.469664 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.474093 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.480065 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: 
I0214 05:26:37.481372 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.487435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.506977 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " pod="openstack/tempest-tests-tempest" Feb 14 05:26:37 crc kubenswrapper[4867]: I0214 05:26:37.580051 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.372243 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.452539 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:26:38 crc kubenswrapper[4867]: I0214 05:26:38.906690 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerStarted","Data":"69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a"} Feb 14 05:26:41 crc kubenswrapper[4867]: I0214 05:26:41.996928 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:41 crc kubenswrapper[4867]: E0214 05:26:41.997818 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:26:57 crc kubenswrapper[4867]: I0214 05:26:56.999246 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:26:57 crc kubenswrapper[4867]: E0214 05:26:57.000772 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:27:09 crc kubenswrapper[4867]: I0214 05:27:09.998617 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.964077 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.970214 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh78z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a161c594-8af3-458f-911a-bbf51e7bfcdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 05:27:52 crc kubenswrapper[4867]: E0214 05:27:52.971598 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" Feb 14 05:27:53 crc kubenswrapper[4867]: I0214 05:27:53.853418 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"} Feb 14 05:27:53 crc kubenswrapper[4867]: E0214 05:27:53.855888 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" Feb 14 05:28:10 crc kubenswrapper[4867]: I0214 05:28:10.131644 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 14 05:28:13 crc kubenswrapper[4867]: I0214 05:28:13.097725 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerStarted","Data":"b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c"} Feb 14 05:28:13 crc kubenswrapper[4867]: I0214 05:28:13.125721 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.491671496 podStartE2EDuration="1m37.125695581s" podCreationTimestamp="2026-02-14 05:26:36 +0000 UTC" firstStartedPulling="2026-02-14 05:26:38.452216737 +0000 UTC m=+4630.533154071" lastFinishedPulling="2026-02-14 05:28:10.086240842 +0000 UTC m=+4722.167178156" observedRunningTime="2026-02-14 05:28:13.115282887 +0000 UTC m=+4725.196220201" watchObservedRunningTime="2026-02-14 05:28:13.125695581 +0000 UTC m=+4725.206632905" Feb 14 05:29:18 crc kubenswrapper[4867]: I0214 05:29:18.816062 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:18 crc kubenswrapper[4867]: I0214 05:29:18.879669 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.049746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.050237 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.050468 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.074394 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.077200 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.135093 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.153472 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.153880 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154036 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154089 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154220 4867 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.154271 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.221468 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257091 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257240 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.257427 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.261913 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.352971 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.357541 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.357613 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " 
pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.370284 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"redhat-operators-9jj9q\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") " pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.371262 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"certified-operators-fwfld\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") " pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.497561 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:19 crc kubenswrapper[4867]: I0214 05:29:19.544191 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.888714 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.892325 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.907595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.907941 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.908035 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:20 crc kubenswrapper[4867]: I0214 05:29:20.911003 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.011565 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.011701 4867 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.012079 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.024965 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.027589 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.048540 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"community-operators-n4l4x\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") " pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:21 crc kubenswrapper[4867]: I0214 05:29:21.239478 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.135267 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"] Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.157547 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"] Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.331768 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"] Feb 14 05:29:22 crc kubenswrapper[4867]: W0214 05:29:22.338176 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07eb1e9_f4cc_4664_b9f6_80322fe0644a.slice/crio-5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8 WatchSource:0}: Error finding container 5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8: Status 404 returned error can't find the container with id 5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.922452 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.923198 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.926938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"6b53ea8d4257c47786cd3a09e618ae66005b213cde9dca1141144554e272f271"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941017 4867 generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941145 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.941184 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"611fc79292fb2762358fe75567d94939459a2919b3fc494b0f725c85bd01c821"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.954862 4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f" exitCode=0 Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 05:29:22.954909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"} Feb 14 05:29:22 crc kubenswrapper[4867]: I0214 
05:29:22.954954 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8"} Feb 14 05:29:23 crc kubenswrapper[4867]: E0214 05:29:23.261190 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3532ff4a_374c_407b_b01c_b63267b0f9f9.slice/crio-3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.976886 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"} Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.979236 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} Feb 14 05:29:24 crc kubenswrapper[4867]: I0214 05:29:24.981461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.219439 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.220238 4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7" exitCode=0 Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.223711 4867 generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3" exitCode=0 Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.223769 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.742491 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.749259 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.783363 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.865600 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.866117 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.866288 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969015 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969184 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:31 crc kubenswrapper[4867]: I0214 05:29:31.969244 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.008362 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.029666 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.167869 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"redhat-marketplace-gbzmm\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") " pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.266695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerStarted","Data":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"} Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.313230 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fwfld" podStartSLOduration=5.562013734 podStartE2EDuration="14.31075793s" podCreationTimestamp="2026-02-14 05:29:18 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.953538667 +0000 UTC m=+4795.034475981" lastFinishedPulling="2026-02-14 05:29:31.702282863 +0000 UTC m=+4803.783220177" observedRunningTime="2026-02-14 05:29:32.288102826 +0000 UTC m=+4804.369040140" watchObservedRunningTime="2026-02-14 05:29:32.31075793 +0000 UTC m=+4804.391695244" Feb 14 05:29:32 crc kubenswrapper[4867]: I0214 05:29:32.373113 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:33 crc kubenswrapper[4867]: I0214 05:29:33.280391 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerStarted","Data":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"} Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.161790 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n4l4x" podStartSLOduration=6.383599954 podStartE2EDuration="15.161765513s" podCreationTimestamp="2026-02-14 05:29:20 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.958104656 +0000 UTC m=+4795.039041970" lastFinishedPulling="2026-02-14 05:29:31.736270205 +0000 UTC m=+4803.817207529" observedRunningTime="2026-02-14 05:29:33.309491478 +0000 UTC m=+4805.390428802" watchObservedRunningTime="2026-02-14 05:29:35.161765513 +0000 UTC m=+4807.242702837" Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.168759 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"] Feb 14 05:29:35 crc kubenswrapper[4867]: W0214 05:29:35.277592 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8a4292_e933_464b_b36d_918f43ce6f65.slice/crio-47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910 WatchSource:0}: Error finding container 47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910: Status 404 returned error can't find the container with id 47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910 Feb 14 05:29:35 crc kubenswrapper[4867]: I0214 05:29:35.302697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910"} Feb 14 05:29:36 crc kubenswrapper[4867]: I0214 05:29:36.315237 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf" exitCode=0 Feb 14 05:29:36 crc kubenswrapper[4867]: I0214 05:29:36.315346 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf"} Feb 14 05:29:37 crc kubenswrapper[4867]: I0214 05:29:37.334962 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc" exitCode=0 Feb 14 05:29:37 crc kubenswrapper[4867]: I0214 05:29:37.335063 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"} Feb 14 05:29:38 crc kubenswrapper[4867]: I0214 05:29:38.351011 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479"} Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.499040 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.501686 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fwfld" Feb 14 05:29:39 crc kubenswrapper[4867]: I0214 05:29:39.708636 4867 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.238300255s: [/var/lib/containers/storage/overlay/548715b8e9244f4bf400b1cdd337ccd8a85917cae6e751f46636b49a47caba3a/diff /var/log/pods/openstack_openstackclient_6fdee887-8ecb-4c1e-8a88-0284fc050f0e/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.375668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"} Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.403712 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jj9q" podStartSLOduration=4.969712322 podStartE2EDuration="21.403682607s" podCreationTimestamp="2026-02-14 05:29:19 +0000 UTC" firstStartedPulling="2026-02-14 05:29:22.932755631 +0000 UTC m=+4795.013692945" lastFinishedPulling="2026-02-14 05:29:39.366725926 +0000 UTC m=+4811.447663230" observedRunningTime="2026-02-14 05:29:40.393757397 +0000 UTC m=+4812.474694711" watchObservedRunningTime="2026-02-14 05:29:40.403682607 +0000 UTC m=+4812.484619921" Feb 14 05:29:40 crc kubenswrapper[4867]: I0214 05:29:40.591194 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:40 crc kubenswrapper[4867]: > Feb 14 05:29:41 crc 
kubenswrapper[4867]: I0214 05:29:41.240663 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:41 crc kubenswrapper[4867]: I0214 05:29:41.241073 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n4l4x" Feb 14 05:29:41 crc kubenswrapper[4867]: I0214 05:29:41.318781 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.402887 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479" exitCode=0 Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.402996 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479"} Feb 14 05:29:42 crc kubenswrapper[4867]: I0214 05:29:42.822543 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:42 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:42 crc kubenswrapper[4867]: > Feb 14 05:29:44 crc kubenswrapper[4867]: I0214 05:29:44.463959 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerStarted","Data":"02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287"} Feb 14 05:29:44 crc kubenswrapper[4867]: I0214 05:29:44.518482 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbzmm" podStartSLOduration=6.810735796 podStartE2EDuration="13.518438122s" podCreationTimestamp="2026-02-14 05:29:31 +0000 UTC" firstStartedPulling="2026-02-14 05:29:36.317578644 +0000 UTC m=+4808.398515958" lastFinishedPulling="2026-02-14 05:29:43.02528097 +0000 UTC m=+4815.106218284" observedRunningTime="2026-02-14 05:29:44.51303331 +0000 UTC m=+4816.593970624" watchObservedRunningTime="2026-02-14 05:29:44.518438122 +0000 UTC m=+4816.599375446" Feb 14 05:29:49 crc kubenswrapper[4867]: I0214 05:29:49.545020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:49 crc kubenswrapper[4867]: I0214 05:29:49.545583 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q" Feb 14 05:29:50 crc kubenswrapper[4867]: I0214 05:29:50.941975 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:50 crc kubenswrapper[4867]: > Feb 14 05:29:50 crc 
kubenswrapper[4867]: I0214 05:29:50.941985 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:50 crc kubenswrapper[4867]: > Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.373716 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.375192 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbzmm" Feb 14 05:29:52 crc kubenswrapper[4867]: I0214 05:29:52.471922 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:52 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:52 crc kubenswrapper[4867]: > Feb 14 05:29:53 crc kubenswrapper[4867]: I0214 05:29:53.471309 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:53 crc kubenswrapper[4867]: I0214 05:29:53.840327 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:53 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:53 crc kubenswrapper[4867]: > Feb 14 05:29:54 crc kubenswrapper[4867]: I0214 05:29:54.551425 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:54 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:54 crc kubenswrapper[4867]: > Feb 14 05:29:54 crc kubenswrapper[4867]: I0214 05:29:54.644407 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:29:54 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:29:54 crc kubenswrapper[4867]: > Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.019978 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.020420 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.253307 4867 trace.go:236] Trace[1095630526]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-compactor-0" (14-Feb-2026 05:29:54.157) (total time: 1057ms): Feb 14 05:29:55 crc kubenswrapper[4867]: Trace[1095630526]: [1.057776266s] [1.057776266s] END Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.830144 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.834128 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.846770 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:55 crc kubenswrapper[4867]: I0214 05:29:55.846850 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.687091 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.687491 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.755153 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:56 crc kubenswrapper[4867]: I0214 05:29:56.760221 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.449164 4867 patch_prober.go:28] 
interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.449706 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.640476 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.831480 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.831563 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:57 crc kubenswrapper[4867]: I0214 05:29:57.885778 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.166716 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.166785 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871692 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" start-of-body= Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871757 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871815 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:58 crc kubenswrapper[4867]: I0214 05:29:58.871832 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.036945 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.036996 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.037056 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.037004 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.122834 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.123294 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get 
\"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213744 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213774 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213853 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213854 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213912 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213893 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213949 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213975 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" 
podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214020 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214065 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.213917 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214194 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214229 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.214183 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399034 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 
05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399043 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399141 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:29:59 crc kubenswrapper[4867]: I0214 05:29:59.399096 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341002 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341097 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341604 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.341517 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343521 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:00 
crc kubenswrapper[4867]: I0214 05:30:00.343576 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343635 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.343584 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.508892 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.766361 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:00 crc kubenswrapper[4867]: I0214 05:30:00.766361 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.251198 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.251314 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.323861 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.742815 4867 
trace.go:236] Trace[1437508483]: "Calculate volume metrics of wal for pod openshift-logging/logging-loki-ingester-0" (14-Feb-2026 05:30:00.449) (total time: 1265ms): Feb 14 05:30:01 crc kubenswrapper[4867]: Trace[1437508483]: [1.265513279s] [1.265513279s] END Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.742814 4867 trace.go:236] Trace[1387678168]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (14-Feb-2026 05:30:00.662) (total time: 1050ms): Feb 14 05:30:01 crc kubenswrapper[4867]: Trace[1387678168]: [1.05098864s] [1.05098864s] END Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760322 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": context deadline exceeded" start-of-body= Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760394 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": context deadline exceeded" Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760339 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:01 crc kubenswrapper[4867]: I0214 05:30:01.760571 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.249235 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:02 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:02 crc kubenswrapper[4867]: > Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.249632 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:02 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:02 crc kubenswrapper[4867]: > Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.298380 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:02 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:02 crc kubenswrapper[4867]: > Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.439062 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" 
podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.442685 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.929717 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:02 crc kubenswrapper[4867]: I0214 05:30:02.929767 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.081217 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.081322 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.251854 4867 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l8d7w container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252029 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podUID="d1f6fd76-f362-495f-969d-a644f072552f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252128 4867 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-l8d7w container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.252161 4867 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-l8d7w" podUID="d1f6fd76-f362-495f-969d-a644f072552f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.457688 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.640771 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.835388 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:03 crc kubenswrapper[4867]: I0214 05:30:03.836290 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:03 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.069177 4867 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-khbvf container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.069251 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podUID="fdb6e297-9da3-41ff-a6f3-de81833178c8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136476 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136686 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136519 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.136813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.407752 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.541199 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.639763 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.639789 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.680687 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.680966 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681051 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Readiness probe 
status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681084 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.681113 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.742261 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.742373 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.883234 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.883301 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.942093 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.946909 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:04 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:04 crc kubenswrapper[4867]: > Feb 14 
05:30:04 crc kubenswrapper[4867]: I0214 05:30:04.982877 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.070472 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.070586 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188717 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188760 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.189098 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.188756 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.189156 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289697 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289736 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.289976 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.391629 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.579694 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.751690 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.757789 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.758282 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.822744 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.822796 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.828792 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.828857 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.846826 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:05 crc kubenswrapper[4867]: I0214 05:30:05.846930 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.025698 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.025703 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.066728 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.370547 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.370811 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 
05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.687926 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.687983 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.754237 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:06 crc kubenswrapper[4867]: I0214 05:30:06.754315 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457663 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457747 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457853 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.457959 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.536993 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 
05:30:07.831143 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.831269 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.925690 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:07 crc kubenswrapper[4867]: I0214 05:30:07.925689 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.123975 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.124058 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.868702 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.869076 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.868708 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:08 crc kubenswrapper[4867]: I0214 05:30:08.869146 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038619 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038678 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038694 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.038734 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.043991 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.044045 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.044084 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 
05:30:09.044134 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.098711 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.098796 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.136807 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.136873 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.139674 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.139733 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221744 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221753 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222023 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.221884 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222070 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222095 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222174 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.222236 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399683 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399711 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399845 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.399760 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:09 crc kubenswrapper[4867]: I0214 05:30:09.780410 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340202 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340527 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340276 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.340590 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344648 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344715 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344777 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.344795 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.549741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.549866 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.758754 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.758848 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.759700 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.760545 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.761975 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.762047 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.785701 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.785772 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.817225 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.819378 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828558 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828594 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828648 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.828654 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846260 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846343 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": context deadline exceeded"
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846405 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:10 crc kubenswrapper[4867]: I0214 05:30:10.846472 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.029169 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: health rpc did not complete within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.029365 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: health rpc did not complete within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.146068 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.148250 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:11 crc kubenswrapper[4867]: >
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.364711 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.364950 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.370605 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.370681 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792733 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792789 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792800 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792828 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.792758 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podUID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.909289 4867 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jsc7b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:11 crc kubenswrapper[4867]: I0214 05:30:11.909350 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podUID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.419891 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.420154 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.759567 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.759807 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.927671 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:12 crc kubenswrapper[4867]: I0214 05:30:12.927761 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.081679 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.082017 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.538711 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.538838 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.539224 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.560688 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.574729 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"} pod="metallb-system/frr-k8s-nzdwg" containerMessage="Container frr failed liveness probe, will be restarted"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.601487 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" containerID="cri-o://a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a" gracePeriod=2
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.666866 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-69bbfbf88f-zhmxc" podUID="516cf204-1263-431e-a450-039739b0d925" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.94:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.667009 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-69bbfbf88f-zhmxc" podUID="516cf204-1263-431e-a450-039739b0d925" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.94:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.857698 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.857824 4867 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-p82xp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.858318 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:13 crc kubenswrapper[4867]: I0214 05:30:13.859298 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-p82xp" podUID="33b576d8-f768-4fd2-895d-7d4ababe8714" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.60:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.070774 4867 patch_prober.go:28] interesting pod/nmstate-webhook-866bcb46dc-khbvf container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.070847 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-khbvf" podUID="fdb6e297-9da3-41ff-a6f3-de81833178c8" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.73:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.136308 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.136366 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.449752 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.449895 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.720745 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.802747 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:14 crc kubenswrapper[4867]: I0214 05:30:14.802852 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.852721 4867 trace.go:236] Trace[276298581]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (14-Feb-2026 05:30:09.453) (total time: 5380ms):
Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[276298581]: [5.380506222s] [5.380506222s] END
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.852722 4867 trace.go:236] Trace[1520390512]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-wwh9m" (14-Feb-2026 05:30:10.298) (total time: 4527ms):
Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[1520390512]: [4.527796865s] [4.527796865s] END
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.856535 4867 trace.go:236] Trace[1768292916]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (14-Feb-2026 05:30:09.646) (total time: 5180ms):
Feb 14 05:30:15 crc kubenswrapper[4867]: Trace[1768292916]: [5.180510583s] [5.180510583s] END
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884697 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884716 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884785 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884874 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884894 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.884948 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-pxm8d" podUID="66c8a0dd-f076-4994-bd42-39c80de83233" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.885102 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:14.966688 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.007080 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerID="a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a" exitCode=143
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.014155 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerDied","Data":"a607ea132c1aa0b9d6c68c3601ae04a26220cd55eee8e095594f2aace6ecac5a"}
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049749 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049868 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-chbgl" podUID="3025ff58-4a91-43f5-8f15-94cadd0cef8b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.049875 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ndb8l" podUID="652d3b74-0634-4f8f-b5ef-3adfc53920eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050167 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tpfxn" podUID="1f889f7b-8ae5-43e3-ab54-d3bf06c010df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050367 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050426 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.050519 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-jxpv2" podUID="185d4fd5-608b-48d8-8731-27e7a05adfe2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.091815 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.132751 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.132862 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.133299 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-bgznq" podUID="4b75df5b-04e5-445f-8d2d-57c6cbe5971c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173686 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173683 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173750 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.173784 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.297697 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.380899 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.463709 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628892 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.34:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628929 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" podUID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628956 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.628897 4867 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-kv4j7 container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629010 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-kv4j7" podUID="94f47db9-4437-4b3e-aee5-f6f65e715e62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629064 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-8dzwp" podUID="6b5078d9-f30f-40a8-b5b5-8eb11271ec10" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629384 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-tf6rg" podUID="74a43e5b-11c4-459d-bbc7-03aa03489f17" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629479 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-2xwdd" podUID="38a9cdf3-42e2-4279-8092-af7e8c82bc51" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.629663 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.711718 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-wwm9m" podUID="7bb6de63-3c92-43de-a01b-b34df765aeba" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752733 4867 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-7zdqp container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752781 4867 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-7qfh9 container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752795 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-7zdqp" podUID="c9201352-8585-47d4-9c13-b9e21ac4cd9f" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.50:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752817 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-7qfh9" podUID="31f03187-50f6-4015-afdc-422455a63006" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.34:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.752732 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761303 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761312 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.761436 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.830815 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" podUID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.830840 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podUID="ffb00aaf-6760-440e-827a-f795baf3693a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.872801 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873028 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68f46476f-snrw6" podUID="bc4bb4fd-bcc8-438b-af84-a2db3d3e346a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873129 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-dszdp" podUID="ffb00aaf-6760-440e-827a-f795baf3693a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873367 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873392 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873426 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873440 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873464 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.873478 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.884836 4867 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-5td7f container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:15 crc kubenswrapper[4867]: I0214 05:30:15.884931 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-5td7f" podUID="9c48c070-b4b3-48af-b40a-d82788f764d9" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.51:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.071067 4867 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-cfcbp container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.071141 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-cfcbp" podUID="837b4fe4-f827-4882-8af7-225b18bb3e22" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.105575 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.105675 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.196747 4867 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.197167 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="6975f95f-884b-4952-8bf8-0d18537e3403" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.56:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272843 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272892 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-55dcdcc8d-49t56" podUID="d72a97fb-2a6a-4af1-8f0c-de88ab679119" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.272853 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-vwvtz" podUID="9ec66be5-3947-45d1-bf34-c7639e8d4c8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.313807 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" podUID="dc65ca0c-1d72-468f-b600-dfb8332bf4bd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.313807 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.314220 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-7866795846-t7hwz" podUID="67e3f2b9-2dbf-4c35-b1cd-02be51f58e38" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.314329 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-6d9jj" podUID="82e5dbee-ab1e-498c-9460-be75226afa18" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371385 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371404 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.371654 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.398690 4867 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.398822 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="3c3333e0-ec4e-41bf-8296-9469ad3ac9cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.686812 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.686920 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.687059 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-796d588566-h9wcn"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754741 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754816 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.754912 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.763932 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.764883 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.765152 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output="command timed out"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.833662 4867 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.833734 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="775ca902-fd03-4191-9440-ea598768d4e6" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.836048 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.836093 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.839289 4867 trace.go:236] Trace[772054449]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (14-Feb-2026 05:30:14.235) (total time: 2603ms):
Feb 14 05:30:16 crc kubenswrapper[4867]: Trace[772054449]: [2.603724734s] [2.603724734s] END
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.845340 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:16 crc kubenswrapper[4867]: I0214 05:30:16.845410 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.196072 4867 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.196117 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="6975f95f-884b-4952-8bf8-0d18537e3403" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.458452 4867 patch_prober.go:28] interesting pod/metrics-server-76ddc659b-tzdtd container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.458773 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.84:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463015 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463920 4867 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.463986 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="3c3333e0-ec4e-41bf-8296-9469ad3ac9cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.57:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.468067 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"} pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.470989 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" podUID="652d53d9-a4c0-4061-b817-ca5173785521" containerName="metrics-server" containerID="cri-o://075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715" gracePeriod=170
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.614189 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-xlg4t" podUID="34f53dfe-4707-4a5c-8745-c4ed944c6a6a" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.688562 4867 patch_prober.go:28] interesting pod/console-796d588566-h9wcn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.688614 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-796d588566-h9wcn" podUID="41d35864-bb64-45f3-bc1e-a7d5440c35ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.135:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.759062 4867 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while awaiting
headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.759460 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.769930 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output="command timed out" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838825 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838921 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.838996 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839399 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839446 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-w69fq" podUID="be125812-eeef-4043-bef9-fea01037dddb" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839485 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839515 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839556 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to 
connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839580 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gbz8c" podUID="c8fe62eb-932d-4b17-8ffa-6c90780bdd74" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839629 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839645 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839678 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.839940 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-bvb8v" podUID="140d0152-99c5-425c-b956-595dea337206" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:17 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:17 crc kubenswrapper[4867]: > Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.840847 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} pod="openshift-marketplace/certified-operators-mrccv" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.840896 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" containerID="cri-o://5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" gracePeriod=30 Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.867794 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.870577 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.873935 
4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:17 crc kubenswrapper[4867]: E0214 05:30:17.873999 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.885675 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.885777 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.966315 4867 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-wwh9m container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.966391 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podUID="bbf9502a-06eb-4e94-911a-3a7ac1426dd8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.972710 4867 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-wwh9m container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:17 crc kubenswrapper[4867]: I0214 05:30:17.972769 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-wwh9m" podUID="bbf9502a-06eb-4e94-911a-3a7ac1426dd8" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233355 4867 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-p69vd container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233796 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" 
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.233854 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.236784 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"} pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.236838 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" podUID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerName="authentication-operator" containerID="cri-o://b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706" gracePeriod=30 Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.496234 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-nzdwg" event={"ID":"cfde5532-97c7-47b8-8b63-0159fc9e82b9","Type":"ContainerStarted","Data":"fb3865629417f734b4b087d4b7a5ea9ec4e1ff48d5844ca96c3162d51e0b069a"} Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787864 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787915 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787947 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.787982 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.788002 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 
05:30:18.788084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.789585 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"} pod="openshift-console-operator/console-operator-58897d9998-htv2n" containerMessage="Container console-operator failed liveness probe, will be restarted" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.789627 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" containerID="cri-o://0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a" gracePeriod=30 Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.840376 4867 patch_prober.go:28] interesting pod/monitoring-plugin-7f5858d95d-fvlxd container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.840431 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" podUID="bcf2722f-8c1f-4061-8c4a-9888961c5361" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.85:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:18 crc kubenswrapper[4867]: I0214 05:30:18.927845 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" podUID="c83fa345-043f-453c-b797-a00db3111d44" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040406 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040466 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.040522 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.042872 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"} 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" containerMessage="Container packageserver failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.042914 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" containerID="cri-o://c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062668 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062739 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062794 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": context deadline exceeded" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062835 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": context deadline exceeded" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062842 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.062957 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063027 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063230 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.063327 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:19 crc kubenswrapper[4867]: 
I0214 05:30:19.064217 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" containerMessage="Container olm-operator failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.064296 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" containerID="cri-o://6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.144779 4867 patch_prober.go:28] interesting pod/thanos-querier-85586fc579-b75c7 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.144863 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-85586fc579-b75c7" podUID="72801c86-0365-4e93-8887-4fdc6d8a9cad" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.82:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234708 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234779 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234786 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234812 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234859 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234856 4867 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234780 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234914 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.234974 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235012 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235032 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235037 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235051 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235017 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235119 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 
14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235141 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235163 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.235174 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242811 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} pod="openshift-ingress/router-default-5444994796-qlkzp" containerMessage="Container router failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242828 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242853 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242871 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" containerID="cri-o://d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148" gracePeriod=10 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242885 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" containerID="cri-o://1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.242890 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" containerID="cri-o://a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.372651 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8c8003cd-8992-4714-96a2-2e649aead118" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.166:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399070 4867 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399088 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399134 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399306 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399179 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.399415 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.401096 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.401139 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" containerID="cri-o://1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537" gracePeriod=30 Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.761358 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.789364 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator 
namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:19 crc kubenswrapper[4867]: I0214 05:30:19.789427 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.024371 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.064563 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.064639 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318676 4867 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-rv8cb container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" podUID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318798 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.318813 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342377 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.88:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342454 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342393 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342578 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.342648 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343767 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"} pod="openshift-controller-manager/controller-manager-574c444545-stzjc" containerMessage="Container controller-manager failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343837 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343926 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343931 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" containerID="cri-o://8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.343954 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344011 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344030 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344856 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"} pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.344890 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" containerID="cri-o://b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.400309 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.400369 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.508687 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.508806 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.546420 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" 
event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerDied","Data":"56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab"} Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.550426 4867 generic.go:334] "Generic (PLEG): container finished" podID="94ff35ef-77e1-4085-ad2f-837ebc666b2a" containerID="56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab" exitCode=1 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.552323 4867 scope.go:117] "RemoveContainer" containerID="56f2401d817967e7dfc249d99a2014932b93916388d466d645c9c4c84aa46aab" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760040 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760419 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.760991 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.761150 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.761960 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="62ee3130-2952-453e-82b6-dba068ba1bc9" containerName="prometheus" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763588 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763613 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.763847 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.766568 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.766743 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" containerID="cri-o://86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7" gracePeriod=30 Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.815654 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" 
containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.815790 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="89e70483-d3e8-4758-bb61-ae6147dd4f39" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.9:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828797 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828867 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.54:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828938 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-md7ts container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.828975 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829001 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-md7ts" podUID="d28844dc-6974-446b-bd9a-b22586858387" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.54:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829008 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829252 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.829286 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" 
podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846070 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846154 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.53:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846105 4867 patch_prober.go:28] interesting pod/logging-loki-gateway-767ffcbf75-l82l4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:20 crc kubenswrapper[4867]: I0214 05:30:20.846294 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-767ffcbf75-l82l4" podUID="0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.53:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.324773 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.325212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.550732 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" podUID="ebee5651-7233-4c18-bb97-a4dc91eabef4" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.563004 4867 generic.go:334] "Generic (PLEG): container finished" podID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerID="1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570" exitCode=0 Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.563083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerDied","Data":"1c2f18b80eabbfd8f9faa98d372c322248253795be83a6d80562b3ec3e4cc570"} Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.571410 4867 generic.go:334] "Generic (PLEG): 
container finished" podID="64ff8480-2ca0-40d5-b5c9-448d0db3c575" containerID="dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812" exitCode=1 Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.571482 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerDied","Data":"dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812"} Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.572814 4867 scope.go:117] "RemoveContainer" containerID="dba0773e63253be2ecd558d953c291677c56007f46dc4d0a1851dfa825654812" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577095 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-htv2n_dc723269-8ee6-4236-9eaa-169a00d76442/console-operator/0.log" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577145 4867 generic.go:334] "Generic (PLEG): container finished" podID="dc723269-8ee6-4236-9eaa-169a00d76442" containerID="0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a" exitCode=1 Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.577205 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerDied","Data":"0048178c63d05d01b42d22de443716f1298cccafc53f9294b614ff7f1612f71a"} Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.581595 4867 generic.go:334] "Generic (PLEG): container finished" podID="46664b60-c0df-4869-9304-cec4de385a86" containerID="6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c" exitCode=0 Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.581627 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerDied","Data":"6ff2ed29a3b77b2481e62c7a269a418387c210dfacd8443a4552d6a8773dde4c"} Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.761329 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792725 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792738 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" podUID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792798 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792847 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792893 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.792918 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.793084 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.796767 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.910475 4867 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-jsc7b container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:21 crc kubenswrapper[4867]: I0214 05:30:21.910567 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jsc7b" podUID="d58c6e7c-e0bc-4833-ab34-348c03f75da7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.415703 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-nzdwg" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449656 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449699 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.91:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc 
kubenswrapper[4867]: I0214 05:30:22.449656 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" podUID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449775 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.449805 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.451208 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"} pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" containerMessage="Container webhook-server failed liveness probe, will be restarted" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.451258 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" podUID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerName="webhook-server" containerID="cri-o://7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66" gracePeriod=2 Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.592717 4867 generic.go:334] "Generic (PLEG): container finished" podID="553b1e39-c2d5-459d-a7fd-058f936804cb" containerID="b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706" exitCode=0 Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.592781 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerDied","Data":"b3ec6ea524af8ababe998d66f1ad7b4fd6c79fcd1e44d811fa653aa1b5766706"} Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.595014 4867 generic.go:334] "Generic (PLEG): container finished" podID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerID="1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537" exitCode=0 Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.595045 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerDied","Data":"1771829f5105142e5fb1906dbc8e69f1496d47af4f931c40341a4509f9eb8537"} Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.714575 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.718754 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.758961 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 
05:30:22.759618 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.793824 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.793879 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928779 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928824 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928879 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.928948 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.930182 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"} pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Feb 14 05:30:22 crc kubenswrapper[4867]: I0214 05:30:22.930231 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" containerID="cri-o://e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d" gracePeriod=10 Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.330270 4867 trace.go:236] Trace[1563430983]: "Calculate volume metrics of glance for pod openstack/glance-default-external-api-0" (14-Feb-2026 05:30:21.071) (total time: 2241ms): Feb 14 05:30:23 crc kubenswrapper[4867]: Trace[1563430983]: [2.241733536s] [2.241733536s] END Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.454002 4867 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.457218 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.460588 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:23 crc kubenswrapper[4867]: E0214 05:30:23.460656 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.518321 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:23 crc kubenswrapper[4867]: > Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.522766 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:23 crc kubenswrapper[4867]: > Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.526987 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:23 crc kubenswrapper[4867]: > Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.531154 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:23 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:23 crc kubenswrapper[4867]: > Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539726 4867 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539722 4867 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.539842 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-nzdwg" podUID="cfde5532-97c7-47b8-8b63-0159fc9e82b9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.606637 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" event={"ID":"94ff35ef-77e1-4085-ad2f-837ebc666b2a","Type":"ContainerStarted","Data":"fe7e9873ab36c7f8d1e55938a1671ab6f035ea944cbd539e11e2ab7ea37bf6d5"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.606884 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-htv2n_dc723269-8ee6-4236-9eaa-169a00d76442/console-operator/0.log" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609201 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-htv2n" event={"ID":"dc723269-8ee6-4236-9eaa-169a00d76442","Type":"ContainerStarted","Data":"9134060b76bd36568c962f17b9fb144f5365dea8e3056127b8a490f076986c9c"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609337 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609873 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.609905 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.614563 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" event={"ID":"b1dba42c-e410-49fd-8c48-449fca5d65dc","Type":"ContainerStarted","Data":"ac0bf9407908a49c2fd7cb80c9c437229e335f1a3c5baa1bfaeee6f27fce2d00"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.615809 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.616373 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.616412 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618452 4867 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618490 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618682 4867 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618693 4867 generic.go:334] "Generic (PLEG): container finished" podID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerID="45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e" exitCode=1 Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618734 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.618776 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerDied","Data":"45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.621054 4867 generic.go:334] "Generic (PLEG): container finished" podID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerID="b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c" exitCode=0 Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.621119 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerDied","Data":"b2b4d86a5abf177e594abdba567dce9b2b749401c08580b54c991a839d54dc2c"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.623321 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" event={"ID":"64ff8480-2ca0-40d5-b5c9-448d0db3c575","Type":"ContainerStarted","Data":"1f5a72a1daf050366de810bc1aa6558f7631e2545468e80dc7bcb0232f9f5e4d"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.623571 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 
05:30:23.625262 4867 generic.go:334] "Generic (PLEG): container finished" podID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerID="8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a" exitCode=0 Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.625299 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerDied","Data":"8ea3d56833a0efa19ba33e28ae9cc5702afdb9a3c57db5fa754cb3ed8734293a"} Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.625786 4867 scope.go:117] "RemoveContainer" containerID="45aa757658fb299c4e4089cef9945c1427c62ec817c7670b4ba12f2330eb044e" Feb 14 05:30:23 crc kubenswrapper[4867]: I0214 05:30:23.971741 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" podUID="85e0628d-4132-4c09-9da0-35db43024c9c" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.93:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.030864 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-x7qx5" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.407744 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" podUID="10461723-ecff-48fe-a034-9a07bf3bf8f7" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.408095 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.419351 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-6b9546c8f4-49lm8" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.555662 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.555733 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556003 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556110 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556786 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"} 
pod="metallb-system/speaker-4hvw7" containerMessage="Container speaker failed liveness probe, will be restarted" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.556836 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" containerID="cri-o://1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea" gracePeriod=2 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.595846 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"] Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.599899 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.624324 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.634772 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.657764 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" event={"ID":"29172228-9eb8-461f-8f75-cdd021e0d30c","Type":"ContainerStarted","Data":"5256716fb99e6b9c6c166c6a352357713533194081156a34479ed30354c65c2c"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658145 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658479 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.658620 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.661149 4867 generic.go:334] "Generic (PLEG): container finished" podID="1b196c26-84a1-408f-913b-eb50572102cf" containerID="c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32" exitCode=0 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.661213 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerDied","Data":"c943db06330ddf72b1ccef3b0bef6de1e4225825a436a45e341b66e82e44cf32"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.663174 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-p69vd" 
event={"ID":"553b1e39-c2d5-459d-a7fd-058f936804cb","Type":"ContainerStarted","Data":"648ace95ef188599adcebc066729e1605bfdd3d635297064138d6abe64b4b847"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.665717 4867 generic.go:334] "Generic (PLEG): container finished" podID="d5e9c930-96ca-4a35-af4f-b8ae033469a5" containerID="7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66" exitCode=0 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.665777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerDied","Data":"7b47d8831936f974296fa5b46313134eee7c7016a1d36736b8027bb6454a7f66"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.668665 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" event={"ID":"46664b60-c0df-4869-9304-cec4de385a86","Type":"ContainerStarted","Data":"3ea8f9e51f3c690e4d7e7df0149e187df6541e37b12dcea391f106c1a4377dc2"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670006 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670090 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.670141 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.673823 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" event={"ID":"4a918644-d451-4f71-8a69-627b0de1ebb7","Type":"ContainerStarted","Data":"3e432b7bd7e7479ef22fb4a1f58571fc980580d6853a79877068a64f678ca70f"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.674045 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.676763 4867 generic.go:334] "Generic (PLEG): container finished" podID="e1d5f0bd-4e8c-45c7-9d4e-c530689948ad" containerID="4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027" exitCode=1 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.676830 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerDied","Data":"4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.678430 4867 scope.go:117] "RemoveContainer" containerID="4de37120723c6ceb858cc27ed5593f4b0f873f34286ef080ea925db6e29ad027" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.679255 4867 generic.go:334] "Generic (PLEG): container finished" 
podID="85e0628d-4132-4c09-9da0-35db43024c9c" containerID="e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d" exitCode=0 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.679295 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerDied","Data":"e4c58a36f0ba8ec1610fa373ec1045e46fc1fd0f54e17718ead321d3a683914d"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683292 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" event={"ID":"b967a9e8-e5f1-4c92-889a-1dd6adf747fd","Type":"ContainerStarted","Data":"6ad135b222f9c6b4f7d1f78014739538c53ab615d351d5d0da90a6bfb8609f53"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683693 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683766 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.683805 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.687929 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" event={"ID":"a9fc9dc1-437a-4160-b805-fabfd7f877c2","Type":"ContainerStarted","Data":"795af41e3a2def91739801d0722202b0215cc42eff67dec20742a6cb0eae5da3"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.687963 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.688712 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.688914 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.690549 4867 generic.go:334] "Generic (PLEG): container finished" podID="a0c7654d-1553-4b68-8af4-253f77d7c657" containerID="a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806" exitCode=0 Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692178 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerDied","Data":"a3c4bddbff04cdcab7e0f56ecaa633a0e493e61f17878482d74e1ba56c884806"} Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692237 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.692390 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.693367 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.693526 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.796247 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.802782 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.803169 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.805356 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913320 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddk97\" (UniqueName: 
\"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913593 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.913749 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:24 crc kubenswrapper[4867]: I0214 05:30:24.927426 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.038524 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:25 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:25 crc kubenswrapper[4867]: > Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.050989 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:25 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:25 crc kubenswrapper[4867]: > Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.079949 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.079980 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.086115 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"} pod="openstack-operators/openstack-operator-index-29mb7" containerMessage="Container registry-server failed liveness probe, will be restarted" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.086190 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" containerID="cri-o://56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" gracePeriod=30 Feb 14 05:30:25 crc 
kubenswrapper[4867]: E0214 05:30:25.098789 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.102193 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.102845 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.107318 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" cmd=["grpc_health_probe","-addr=:50051"] Feb 14 05:30:25 crc kubenswrapper[4867]: E0214 05:30:25.107366 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-29mb7" podUID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerName="registry-server" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.121134 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"collect-profiles-29517450-67scd\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.260865 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.555581 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"] Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.715193 4867 generic.go:334] "Generic (PLEG): container finished" podID="b4bb205c-0469-49a0-b783-9b51ae11ddfe" containerID="56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.715564 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerDied","Data":"56b5a70b5aa1a66aaa851499b6c31a6255ba3615b98722b19c9dce1fa934e34b"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.733807 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" event={"ID":"e1d5f0bd-4e8c-45c7-9d4e-c530689948ad","Type":"ContainerStarted","Data":"7c0a0f796434121a6b451116b6114beebec2415659a834ee519edae8f84bc637"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.734003 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-796d588566-h9wcn" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.734048 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.739550 4867 generic.go:334] "Generic (PLEG): container finished" podID="c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d" containerID="0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc" exitCode=1 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.739639 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerDied","Data":"0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.750706 4867 scope.go:117] "RemoveContainer" containerID="0f79bed42d7427fc6fb8fd280b968295c72ddab44991fb6bd63a312b21582ecc" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.753206 4867 generic.go:334] "Generic (PLEG): container finished" podID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerID="1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.753269 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerDied","Data":"1c50e8be32836da6fce22b59341f0df53ed1589043997f275a93de461dc1feea"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.758796 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" event={"ID":"d5e9c930-96ca-4a35-af4f-b8ae033469a5","Type":"ContainerStarted","Data":"19e39365907f39db0aefd7f0404c6815634871f70efdbfbc4ee845e439bb7415"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.759283 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.759812 
4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.766237 4867 generic.go:334] "Generic (PLEG): container finished" podID="634f9e2f-2100-49e3-a31f-a369cf8ff13f" containerID="403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f" exitCode=1 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.766335 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerDied","Data":"403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.767412 4867 scope.go:117] "RemoveContainer" containerID="403136f34a075ecd6d7c5c8a094d619a3f5e7e071fa96a3e6040cda845a2f86f" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.821386 4867 generic.go:334] "Generic (PLEG): container finished" podID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerID="5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690" exitCode=0 Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.823272 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerDied","Data":"5a18a56f3dda9e5462434b66a63a51cc809ec7dc9d7b1183267bce6297e94690"} Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824168 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824214 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824289 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824306 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.824468 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831324 4867 patch_prober.go:28] interesting 
pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831365 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831438 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831953 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:25 crc kubenswrapper[4867]: I0214 05:30:25.831980 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.266873 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" containerID="cri-o://fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" gracePeriod=25 Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.848953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" event={"ID":"634f9e2f-2100-49e3-a31f-a369cf8ff13f","Type":"ContainerStarted","Data":"4389fd5035a82f3c51a86d0103019ee8c507417a4882c3decf546c05a63b7fb0"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.851368 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.853751 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-75585db5cc-kzk25" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.861546 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" event={"ID":"a0c7654d-1553-4b68-8af4-253f77d7c657","Type":"ContainerStarted","Data":"76bdc3a6742cdd5fb37605a49dd459333a78bcbc1eeb32e12badbc6d5d8cde36"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.862739 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.865283 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" event={"ID":"1b196c26-84a1-408f-913b-eb50572102cf","Type":"ContainerStarted","Data":"96f714c002693e445ae683c2076037bd2aff1426418df2b693bdfa14640e4b82"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866159 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866241 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.866269 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.868408 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" event={"ID":"85e0628d-4132-4c09-9da0-35db43024c9c","Type":"ContainerStarted","Data":"93c21c64b23ef48f6f85f2357742df956c068366b556eb0d6321f48f119996e8"} Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.869012 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:26 crc kubenswrapper[4867]: I0214 05:30:26.873369 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-87pdl" event={"ID":"c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d","Type":"ContainerStarted","Data":"c86095bf55dbd005bfba9ff7baa5168e85bdb646b4e05c4676ed04e13f016c6d"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.022209 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.058555 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7f5858d95d-fvlxd" Feb 14 05:30:27 crc kubenswrapper[4867]: E0214 05:30:27.333564 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27437fd9_2bc5_48ac_9e34_e733da15dd2b.slice/crio-86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.516444 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-nzdwg" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787368 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: 
Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787420 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787425 4867 patch_prober.go:28] interesting pod/console-operator-58897d9998-htv2n container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.787480 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-htv2n" podUID="dc723269-8ee6-4236-9eaa-169a00d76442" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.895286 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mrccv" event={"ID":"e0fe6db4-add0-4993-a40c-c5b6725565fa","Type":"ContainerStarted","Data":"67bec8c1a78964f3af1c6beb53b597e598e64e7e5ded1183b3aeb8057ed46b8a"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.899083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-29mb7" event={"ID":"b4bb205c-0469-49a0-b783-9b51ae11ddfe","Type":"ContainerStarted","Data":"85318066019cefb00a675f400eaf63d7a35438da88d78e8a7709d1024bb99115"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.904265 4867 generic.go:334] "Generic (PLEG): container finished" podID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerID="86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7" exitCode=0 Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.904654 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerDied","Data":"86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7"} Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.905328 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:27 crc kubenswrapper[4867]: I0214 05:30:27.905381 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037681 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: 
connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037737 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037682 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.037803 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.044896 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.044969 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.045008 4867 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tcss9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.045084 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" podUID="46664b60-c0df-4869-9304-cec4de385a86" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.126753 4867 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.127285 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.126757 4867 patch_prober.go:28] 
interesting pod/catalog-operator-68c6474976-dgp2v container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.127369 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" podUID="b1dba42c-e410-49fd-8c48-449fca5d65dc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.220752 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Feb 14 05:30:28 crc kubenswrapper[4867]: [+]has-synced ok Feb 14 05:30:28 crc kubenswrapper[4867]: [-]process-running failed: reason withheld Feb 14 05:30:28 crc kubenswrapper[4867]: healthz check failed Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.220814 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398533 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398644 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398592 4867 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-72mpc container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.398766 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" podUID="b967a9e8-e5f1-4c92-889a-1dd6adf747fd" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.927180 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"} Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.930517 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-4hvw7" event={"ID":"6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8","Type":"ContainerStarted","Data":"63dd68177499d45fbfb9999ed189a1e4fa94afccb38254448208d9f63c6805de"} Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931048 4867 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s94ht container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931082 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" podUID="1b196c26-84a1-408f-913b-eb50572102cf" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Feb 14 05:30:28 crc kubenswrapper[4867]: I0214 05:30:28.931965 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4hvw7" podUID="6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.340784 4867 patch_prober.go:28] interesting pod/controller-manager-574c444545-stzjc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.341189 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" podUID="a9fc9dc1-437a-4160-b805-fabfd7f877c2" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.88:8443/healthz\": dial tcp 10.217.0.88:8443: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.343947 4867 patch_prober.go:28] interesting pod/route-controller-manager-7575f7b945-9zbh8 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.344002 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" podUID="29172228-9eb8-461f-8f75-cdd021e0d30c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.87:8443/healthz\": dial tcp 10.217.0.87:8443: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.345215 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.531086 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-jqq2w" Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.646646 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register 
an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.651646 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.655534 4867 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Feb 14 05:30:29 crc kubenswrapper[4867]: E0214 05:30:29.655644 4867 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.745551 4867 patch_prober.go:28] interesting pod/loki-operator-controller-manager-5479889c99-ltnxf container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.47:8081/readyz\": dial tcp 10.217.0.47:8081: connect: connection refused" start-of-body= Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.745597 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" podUID="4a918644-d451-4f71-8a69-627b0de1ebb7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.47:8081/readyz\": dial tcp 10.217.0.47:8081: connect: connection refused" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.948377 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qlkzp_4b71d414-e6bf-4f51-a808-1938c1edf207/router/0.log" Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.948743 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerID="d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148" exitCode=137 Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.949908 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerDied","Data":"d6f9a4aceb60429befbb079eda354a35872f1921b3ba953e54763f01e9e1d148"} Feb 14 05:30:29 crc kubenswrapper[4867]: I0214 05:30:29.949944 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.848039 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.982484 4867 generic.go:334] "Generic (PLEG): container finished" podID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" 
containerID="fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217" exitCode=0 Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.982595 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerDied","Data":"fcaa00f4074b2721a8dae207c9036fd698a9b4947b9c404b3f74667a5403e217"} Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.989005 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:30 crc kubenswrapper[4867]: I0214 05:30:30.989314 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.003950 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qlkzp_4b71d414-e6bf-4f51-a808-1938c1edf207/router/0.log" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.075195 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qlkzp" event={"ID":"4b71d414-e6bf-4f51-a808-1938c1edf207","Type":"ContainerStarted","Data":"44f0f426b9ce03e78b4461340baf65994577935885180313c500722c000c86c5"} Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.104995 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.107752 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.107805 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.253370 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.253427 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.350420 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:31 crc kubenswrapper[4867]: > Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.357234 4867 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:31 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:31 crc kubenswrapper[4867]: > Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.869593 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:31 crc kubenswrapper[4867]: I0214 05:30:31.999337 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd"] Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.056771 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27199a8-11ac-4e59-90b8-b42387dd6dd2","Type":"ContainerStarted","Data":"9ba3b8e288c1810798f2349fac6c2540acaad348ddc3c638e43fd430ab504089"} Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.120808 4867 patch_prober.go:28] interesting pod/router-default-5444994796-qlkzp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 14 05:30:32 crc kubenswrapper[4867]: [-]has-synced failed: reason withheld Feb 14 05:30:32 crc kubenswrapper[4867]: [+]process-running ok Feb 14 05:30:32 crc kubenswrapper[4867]: healthz check failed Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.120854 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qlkzp" podUID="4b71d414-e6bf-4f51-a808-1938c1edf207" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.146950 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-29mb7" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.375233 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:32 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:32 crc kubenswrapper[4867]: > Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.604314 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.604841 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.613807 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.613925 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerName="cinder-scheduler" 
containerID="cri-o://702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06" gracePeriod=30 Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.771271 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.771364 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.784032 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:30:32 crc kubenswrapper[4867]: I0214 05:30:32.784092 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.082734 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerStarted","Data":"bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53"} Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.083280 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerStarted","Data":"ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092"} Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.085784 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.106244 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.131404 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.133820 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" podStartSLOduration=26.130930516 podStartE2EDuration="26.130930516s" podCreationTimestamp="2026-02-14 05:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:30:33.104183424 +0000 UTC m=+4865.185120758" watchObservedRunningTime="2026-02-14 05:30:33.130930516 +0000 UTC m=+4865.211867830" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.381276 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="505de461-9e6f-4914-bf50-e2bf4149b566" containerName="galera" containerID="cri-o://339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9" gracePeriod=30 Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.451772 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.451940 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mrccv" Feb 14 05:30:33 crc kubenswrapper[4867]: I0214 05:30:33.502130 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:33 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:33 crc kubenswrapper[4867]: > Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.086477 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-6nhjp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096214 4867 generic.go:334] "Generic (PLEG): container finished" podID="473b9472-6542-4e27-87e9-17365cd400e1" containerID="bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53" exitCode=0 Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096275 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerDied","Data":"bc51a9a60243fe37db817bd1bb60afaae3afb2d44efe11d5562826c857a86b53"} Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.096532 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.100053 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qlkzp" Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.516892 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:34 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:34 crc kubenswrapper[4867]: > Feb 14 05:30:34 crc kubenswrapper[4867]: I0214 05:30:34.542100 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7zkqz" Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.110779 4867 generic.go:334] "Generic (PLEG): container finished" podID="38c903d9-50f6-418b-84d5-7ee82e9d1e2f" containerID="702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06" exitCode=0 Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.110859 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerDied","Data":"702bb86d1f52e378d22876224d381176ef1535b855223d432ee7fca7f6c8bd06"} Feb 14 05:30:35 crc kubenswrapper[4867]: I0214 05:30:35.916357 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.116665 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.117053 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.117162 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") pod \"473b9472-6542-4e27-87e9-17365cd400e1\" (UID: \"473b9472-6542-4e27-87e9-17365cd400e1\") " Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.118561 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.144681 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97" (OuterVolumeSpecName: "kube-api-access-ddk97") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). InnerVolumeSpecName "kube-api-access-ddk97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149151 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149332 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517450-67scd" event={"ID":"473b9472-6542-4e27-87e9-17365cd400e1","Type":"ContainerDied","Data":"ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092"} Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.149362 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea51cf97a09727e5c9495fcbc65a1efc118d0310665ce814fa54cad90fa4e092" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.155010 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "473b9472-6542-4e27-87e9-17365cd400e1" (UID: "473b9472-6542-4e27-87e9-17365cd400e1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220158 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/473b9472-6542-4e27-87e9-17365cd400e1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220201 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddk97\" (UniqueName: \"kubernetes.io/projected/473b9472-6542-4e27-87e9-17365cd400e1-kube-api-access-ddk97\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.220212 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/473b9472-6542-4e27-87e9-17365cd400e1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.265695 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 05:30:36 crc kubenswrapper[4867]: I0214 05:30:36.285192 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517405-57nzc"] Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.013961 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9309a87-899d-49c2-885b-9d5689c3086b" path="/var/lib/kubelet/pods/c9309a87-899d-49c2-885b-9d5689c3086b/volumes" Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161783 4867 generic.go:334] "Generic (PLEG): container finished" podID="505de461-9e6f-4914-bf50-e2bf4149b566" containerID="339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9" exitCode=0 Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161825 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerDied","Data":"339fe681bb88adb32b1f3cac0ab3a9a7c019700102a8ea9f39f2eb6eacf010e9"} Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.161851 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"505de461-9e6f-4914-bf50-e2bf4149b566","Type":"ContainerStarted","Data":"0786b22eca9ace8c7f0637021537b8c4d7bac2e310ec10ad729a4d4b4602c81e"} Feb 14 05:30:37 crc kubenswrapper[4867]: I0214 05:30:37.796795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-htv2n" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.054437 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s94ht" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.073495 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tcss9" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.149099 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dgp2v" Feb 14 05:30:38 crc kubenswrapper[4867]: I0214 05:30:38.406870 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-72mpc" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.190274 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"38c903d9-50f6-418b-84d5-7ee82e9d1e2f","Type":"ContainerStarted","Data":"5b5431547eb607a4a1209617a9ab1ff6fe980675998dc6ffef354f0a308a263a"} Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.279589 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.344469 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-574c444545-stzjc" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.347476 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7575f7b945-9zbh8" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.632605 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.634131 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 14 05:30:39 crc kubenswrapper[4867]: I0214 05:30:39.747867 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5479889c99-ltnxf" Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.289541 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t" Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.579736 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:40 crc kubenswrapper[4867]: > Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.630743 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:40 crc kubenswrapper[4867]: > Feb 14 05:30:40 crc kubenswrapper[4867]: I0214 05:30:40.747212 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.343066 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f9bfb45cb-mpxbn" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.393926 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.431448 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:41 crc kubenswrapper[4867]: I0214 05:30:41.431521 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:42 crc kubenswrapper[4867]: I0214 05:30:42.010539 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-9gqfb" Feb 14 05:30:42 crc kubenswrapper[4867]: 
I0214 05:30:42.148408 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:42 crc kubenswrapper[4867]: I0214 05:30:42.307485 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:42 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:42 crc kubenswrapper[4867]: > Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.081020 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.489451 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4hvw7" Feb 14 05:30:43 crc kubenswrapper[4867]: I0214 05:30:43.913111 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:43 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:43 crc kubenswrapper[4867]: > Feb 14 05:30:44 crc kubenswrapper[4867]: I0214 05:30:44.385587 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 14 05:30:44 crc kubenswrapper[4867]: I0214 05:30:44.507865 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=< Feb 14 05:30:44 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:30:44 crc kubenswrapper[4867]: > Feb 14 05:30:46 crc kubenswrapper[4867]: I0214 05:30:46.845570 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" containerID="cri-o://563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998" gracePeriod=15 Feb 14 05:30:47 crc kubenswrapper[4867]: I0214 05:30:47.284133 4867 generic.go:334] "Generic (PLEG): container finished" podID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerID="563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998" exitCode=0 Feb 14 05:30:47 crc kubenswrapper[4867]: I0214 05:30:47.284185 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerDied","Data":"563d4e57c17a704703d730e549779becfa05a0901ceefc0c24faf0d612500998"} Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.332385 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" event={"ID":"351f0f21-497e-4c3e-99cc-30baff4e6484","Type":"ContainerStarted","Data":"8ecd1d525e321c7dcf77de95967937ad6f027cf611bd81c7d4857db407427727"} Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.333968 4867 patch_prober.go:28] interesting pod/oauth-openshift-79479887dd-9ltbt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.75:6443/healthz\": dial tcp 10.217.0.75:6443: connect: connection refused" 
start-of-body=
Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.334058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt"
Feb 14 05:30:48 crc kubenswrapper[4867]: I0214 05:30:48.334090 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt" podUID="351f0f21-497e-4c3e-99cc-30baff4e6484" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.75:6443/healthz\": dial tcp 10.217.0.75:6443: connect: connection refused"
Feb 14 05:30:49 crc kubenswrapper[4867]: I0214 05:30:49.353616 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-79479887dd-9ltbt"
Feb 14 05:30:50 crc kubenswrapper[4867]: I0214 05:30:50.578223 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:50 crc kubenswrapper[4867]: >
Feb 14 05:30:50 crc kubenswrapper[4867]: I0214 05:30:50.601551 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:50 crc kubenswrapper[4867]: >
Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.316051 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:52 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:52 crc kubenswrapper[4867]: >
Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.449357 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.522277 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:30:52 crc kubenswrapper[4867]: I0214 05:30:52.726533 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"]
Feb 14 05:30:54 crc kubenswrapper[4867]: I0214 05:30:54.416486 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbzmm" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server" containerID="cri-o://02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287" gracePeriod=2
Feb 14 05:30:54 crc kubenswrapper[4867]: I0214 05:30:54.514972 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mrccv" podUID="e0fe6db4-add0-4993-a40c-c5b6725565fa" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:30:54 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:30:54 crc kubenswrapper[4867]: >
Feb 14 05:30:55 crc kubenswrapper[4867]: I0214 05:30:55.444050 4867 generic.go:334] "Generic (PLEG): container finished" podID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerID="02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287" exitCode=0
Feb 14 05:30:55 crc kubenswrapper[4867]: I0214 05:30:55.444359 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287"}
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.150482 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205686 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") pod \"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") "
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205862 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") pod \"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") "
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.205972 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") pod \"ae8a4292-e933-464b-b36d-918f43ce6f65\" (UID: \"ae8a4292-e933-464b-b36d-918f43ce6f65\") "
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.209818 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities" (OuterVolumeSpecName: "utilities") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.243762 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.250527 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs" (OuterVolumeSpecName: "kube-api-access-696zs") pod "ae8a4292-e933-464b-b36d-918f43ce6f65" (UID: "ae8a4292-e933-464b-b36d-918f43ce6f65"). InnerVolumeSpecName "kube-api-access-696zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310890 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310923 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-696zs\" (UniqueName: \"kubernetes.io/projected/ae8a4292-e933-464b-b36d-918f43ce6f65-kube-api-access-696zs\") on node \"crc\" DevicePath \"\""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.310934 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8a4292-e933-464b-b36d-918f43ce6f65-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.460418 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbzmm" event={"ID":"ae8a4292-e933-464b-b36d-918f43ce6f65","Type":"ContainerDied","Data":"47cdca75a2ba0f821663d76cef9b19a6564e32fa60be6d56b7f13820ba0f0910"}
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.460498 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbzmm"
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.464334 4867 scope.go:117] "RemoveContainer" containerID="02fa8e73abcf51bd71a1c91f18d3c7a2d7323bb60e9dc8dc6f9f4004369b2287"
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.505006 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"]
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.513232 4867 scope.go:117] "RemoveContainer" containerID="d8dba4d88b5c6eecbec89d7feae83ad9606443736a1880bc3a3ef22fc521b479"
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.521122 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbzmm"]
Feb 14 05:30:56 crc kubenswrapper[4867]: I0214 05:30:56.556644 4867 scope.go:117] "RemoveContainer" containerID="8c243a37aff3c02c559e404368152638ab794bc475ff69a09f55fcd9db332faf"
Feb 14 05:30:57 crc kubenswrapper[4867]: I0214 05:30:57.013300 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" path="/var/lib/kubelet/pods/ae8a4292-e933-464b-b36d-918f43ce6f65/volumes"
Feb 14 05:30:58 crc kubenswrapper[4867]: I0214 05:30:58.170976 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rv8cb"
Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.564455 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.632014 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:30:59 crc kubenswrapper[4867]: I0214 05:30:59.808857 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"]
Feb 14 05:31:00 crc kubenswrapper[4867]: I0214 05:31:00.598066 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:00 crc kubenswrapper[4867]: >
Feb 14 05:31:00 crc kubenswrapper[4867]: I0214 05:31:00.766831 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-67594686f4-52kwb"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250623 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250706 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.250771 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.252369 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.252456 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8" gracePeriod=600
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.325795 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.409929 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526584 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8" exitCode=0
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526672 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"}
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.526740 4867 scope.go:117] "RemoveContainer" containerID="ec987150c85caa2259b5e07a0130f2569d95269321a91ae517f52e3f4caa949a"
Feb 14 05:31:01 crc kubenswrapper[4867]: I0214 05:31:01.527085 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fwfld" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" containerID="cri-o://d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" gracePeriod=2
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.216852 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"]
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.447096 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.548179 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"}
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551651 4867 generic.go:334] "Generic (PLEG): container finished" podID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c" exitCode=0
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551731 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"}
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551809 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n4l4x" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" containerID="cri-o://87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47" gracePeriod=2
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551755 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fwfld"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551881 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fwfld" event={"ID":"09ba042e-98c3-43cc-aa6a-efbb9a63ae61","Type":"ContainerDied","Data":"611fc79292fb2762358fe75567d94939459a2919b3fc494b0f725c85bd01c821"}
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.551915 4867 scope.go:117] "RemoveContainer" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.580827 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") "
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.588809 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") "
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.589108 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") pod \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\" (UID: \"09ba042e-98c3-43cc-aa6a-efbb9a63ae61\") "
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.593790 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities" (OuterVolumeSpecName: "utilities") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.619444 4867 scope.go:117] "RemoveContainer" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.620976 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn" (OuterVolumeSpecName: "kube-api-access-mg5kn") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). InnerVolumeSpecName "kube-api-access-mg5kn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.695084 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.695152 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5kn\" (UniqueName: \"kubernetes.io/projected/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-kube-api-access-mg5kn\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.781207 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09ba042e-98c3-43cc-aa6a-efbb9a63ae61" (UID: "09ba042e-98c3-43cc-aa6a-efbb9a63ae61"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.847256 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09ba042e-98c3-43cc-aa6a-efbb9a63ae61-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.854903 4867 scope.go:117] "RemoveContainer" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.928747 4867 scope.go:117] "RemoveContainer" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"
Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.944202 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": container with ID starting with d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c not found: ID does not exist" containerID="d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.944740 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c"} err="failed to get container status \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": rpc error: code = NotFound desc = could not find container \"d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c\": container with ID starting with d32a85def59446e2aea01a95f3cfe819170da1f9922b0c56fc8dbc92b574234c not found: ID does not exist"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.944773 4867 scope.go:117] "RemoveContainer" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"
Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.945431 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": container with ID starting with e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3 not found: ID does not exist" containerID="e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.945474 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3"} err="failed to get container status \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": rpc error: code = NotFound desc = could not find container \"e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3\": container with ID starting with e54727b5bf92a59032c5529b8aae9e9aaa32e613387911a5fa36f0cd61a385b3 not found: ID does not exist"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.945520 4867 scope.go:117] "RemoveContainer" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"
Feb 14 05:31:02 crc kubenswrapper[4867]: E0214 05:31:02.946778 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": container with ID starting with 5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d not found: ID does not exist" containerID="5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.946807 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d"} err="failed to get container status \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": rpc error: code = NotFound desc = could not find container \"5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d\": container with ID starting with 5cb79fe74b93324d918674ab2692becf5fd9a155cfb9970da26b3cebb5355a9d not found: ID does not exist"
Feb 14 05:31:02 crc kubenswrapper[4867]: I0214 05:31:02.989307 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.060019 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fwfld"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.257653 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.359729 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.360069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.360537 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") pod \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\" (UID: \"c07eb1e9-f4cc-4664-b9f6-80322fe0644a\") "
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.380821 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9" (OuterVolumeSpecName: "kube-api-access-p5kf9") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "kube-api-access-p5kf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.385187 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities" (OuterVolumeSpecName: "utilities") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.461436 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c07eb1e9-f4cc-4664-b9f6-80322fe0644a" (UID: "c07eb1e9-f4cc-4664-b9f6-80322fe0644a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465114 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465188 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5kf9\" (UniqueName: \"kubernetes.io/projected/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-kube-api-access-p5kf9\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.465206 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07eb1e9-f4cc-4664-b9f6-80322fe0644a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.522306 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mrccv"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.591189 4867 generic.go:334] "Generic (PLEG): container finished" podID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47" exitCode=0
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.592408 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4l4x"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"}
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593691 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4l4x" event={"ID":"c07eb1e9-f4cc-4664-b9f6-80322fe0644a","Type":"ContainerDied","Data":"5ba318c0f038dd00ef73874b614866123801539825c20b7ed97427c3db408ff8"}
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.593721 4867 scope.go:117] "RemoveContainer" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.625768 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mrccv"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.669612 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.684717 4867 scope.go:117] "RemoveContainer" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.741632 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n4l4x"]
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.876754 4867 scope.go:117] "RemoveContainer" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.959865 4867 scope.go:117] "RemoveContainer" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.960695 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": container with ID starting with 87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47 not found: ID does not exist" containerID="87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.960732 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47"} err="failed to get container status \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": rpc error: code = NotFound desc = could not find container \"87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47\": container with ID starting with 87e6c040d38bde68e493b7b3302f19e7c726d33e11f4f16178f8c9d7adfc5f47 not found: ID does not exist"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.960757 4867 scope.go:117] "RemoveContainer" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.961027 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": container with ID starting with 1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7 not found: ID does not exist" containerID="1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961064 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7"} err="failed to get container status \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": rpc error: code = NotFound desc = could not find container \"1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7\": container with ID starting with 1a1f79e7d0e49fdf6f916b2defe58abde42138b8a7f873554959ac654f97cab7 not found: ID does not exist"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961084 4867 scope.go:117] "RemoveContainer" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: E0214 05:31:03.961381 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": container with ID starting with 36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f not found: ID does not exist" containerID="36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"
Feb 14 05:31:03 crc kubenswrapper[4867]: I0214 05:31:03.961414 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f"} err="failed to get container status \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": rpc error: code = NotFound desc = could not find container \"36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f\": container with ID starting with 36bcae6bb363439549f24488fba4f5cff8ec4aa55cfcc0e02fab4feb7920c86f not found: ID does not exist"
Feb 14 05:31:05 crc kubenswrapper[4867]: I0214 05:31:05.011008 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" path="/var/lib/kubelet/pods/09ba042e-98c3-43cc-aa6a-efbb9a63ae61/volumes"
Feb 14 05:31:05 crc kubenswrapper[4867]: I0214 05:31:05.013191 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" path="/var/lib/kubelet/pods/c07eb1e9-f4cc-4664-b9f6-80322fe0644a/volumes"
Feb 14 05:31:10 crc kubenswrapper[4867]: I0214 05:31:10.655564 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:10 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:10 crc kubenswrapper[4867]: >
Feb 14 05:31:17 crc kubenswrapper[4867]: I0214 05:31:17.970096 4867 scope.go:117] "RemoveContainer" containerID="ab4ee5d7ccbbb8ee4ad53cb2ebd2a425cf55cf8aed22876c6ecd5b2b84a7972a"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.690652 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:20 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:20 crc kubenswrapper[4867]: >
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.692753 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.694056 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"} pod="openshift-marketplace/redhat-operators-9jj9q" containerMessage="Container registry-server failed startup probe, will be restarted"
Feb 14 05:31:20 crc kubenswrapper[4867]: I0214 05:31:20.694214 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" containerID="cri-o://0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03" gracePeriod=30
Feb 14 05:31:33 crc kubenswrapper[4867]: I0214 05:31:33.939403 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03" exitCode=0
Feb 14 05:31:33 crc kubenswrapper[4867]: I0214 05:31:33.939498 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"}
Feb 14 05:31:35 crc kubenswrapper[4867]: I0214 05:31:35.973421 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerStarted","Data":"6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"}
Feb 14 05:31:39 crc kubenswrapper[4867]: I0214 05:31:39.545056 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:39 crc kubenswrapper[4867]: I0214 05:31:39.545615 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:31:40 crc kubenswrapper[4867]: I0214 05:31:40.595011 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:40 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:40 crc kubenswrapper[4867]: >
Feb 14 05:31:50 crc kubenswrapper[4867]: I0214 05:31:50.592613 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:31:50 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:31:50 crc kubenswrapper[4867]: >
Feb 14 05:32:00 crc kubenswrapper[4867]: I0214 05:32:00.595027 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" probeResult="failure" output=<
Feb 14 05:32:00 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s
Feb 14 05:32:00 crc kubenswrapper[4867]: >
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.121374 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.191114 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:10 crc kubenswrapper[4867]: I0214 05:32:10.435464 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:11 crc kubenswrapper[4867]: I0214 05:32:11.418095 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jj9q" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" containerID="cri-o://6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27" gracePeriod=2
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.484373 4867 generic.go:334] "Generic (PLEG): container finished" podID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerID="6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27" exitCode=0
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.485163 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"}
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.486782 4867 scope.go:117] "RemoveContainer" containerID="0af814f84e64b35babeb4457762bbfc3989cb29f290cec6370bec1b95e729f03"
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.602482 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.758841 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759486 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759585 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") pod \"3532ff4a-374c-407b-b01c-b63267b0f9f9\" (UID: \"3532ff4a-374c-407b-b01c-b63267b0f9f9\") "
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.759901 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities" (OuterVolumeSpecName: "utilities") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.778425 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6" (OuterVolumeSpecName: "kube-api-access-6dcv6") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "kube-api-access-6dcv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.863397 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-utilities\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.863431 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dcv6\" (UniqueName: \"kubernetes.io/projected/3532ff4a-374c-407b-b01c-b63267b0f9f9-kube-api-access-6dcv6\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.906361 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3532ff4a-374c-407b-b01c-b63267b0f9f9" (UID: "3532ff4a-374c-407b-b01c-b63267b0f9f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 14 05:32:12 crc kubenswrapper[4867]: I0214 05:32:12.965082 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3532ff4a-374c-407b-b01c-b63267b0f9f9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.503777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jj9q" event={"ID":"3532ff4a-374c-407b-b01c-b63267b0f9f9","Type":"ContainerDied","Data":"6b53ea8d4257c47786cd3a09e618ae66005b213cde9dca1141144554e272f271"}
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.504250 4867 scope.go:117] "RemoveContainer" containerID="6f275a36fbe27cd89bd6f963bc54c915a722d81138ab06e240ac5d200b94ad27"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.504400 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jj9q"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.533068 4867 scope.go:117] "RemoveContainer" containerID="b68d87e77e9726db128cb19314bb5165ed9c15cd0be74610a3fa6b601224ffbc"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.565302 4867 scope.go:117] "RemoveContainer" containerID="3ccc1ca8b5aa695fffe9a70b7b97042dbfab6774339fb2708f08dce70c3af3d0"
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.576536 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:13 crc kubenswrapper[4867]: I0214 05:32:13.593414 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jj9q"]
Feb 14 05:32:15 crc kubenswrapper[4867]: I0214 05:32:15.021030 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" path="/var/lib/kubelet/pods/3532ff4a-374c-407b-b01c-b63267b0f9f9/volumes"
Feb 14 05:32:47 crc kubenswrapper[4867]: I0214 05:32:47.929826 4867 generic.go:334] "Generic (PLEG): container finished" podID="652d53d9-a4c0-4061-b817-ca5173785521" containerID="075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715" exitCode=0
Feb 14 05:32:47 crc kubenswrapper[4867]: I0214 05:32:47.929935 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerDied","Data":"075b79918bc2f91b3a5dae96c88d4b1fcea3cd1da542c02c4a8dfaa3b4541715"}
Feb 14 05:32:48 crc kubenswrapper[4867]: I0214 05:32:48.960685 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd" event={"ID":"652d53d9-a4c0-4061-b817-ca5173785521","Type":"ContainerStarted","Data":"ebe3d08837c845b7ee5ed212ba8dbb14e4590da7452a878dc78de2a88b4b09a9"}
Feb 14 05:33:01 crc kubenswrapper[4867]: I0214 05:33:01.251640 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:33:01 crc kubenswrapper[4867]: I0214 05:33:01.252291 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:33:06 crc kubenswrapper[4867]: I0214 05:33:06.449140 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:06 crc kubenswrapper[4867]: I0214 05:33:06.449736 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:26 crc kubenswrapper[4867]: I0214 05:33:26.454642 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:26 crc kubenswrapper[4867]: I0214 05:33:26.459156 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-76ddc659b-tzdtd"
Feb 14 05:33:31 crc kubenswrapper[4867]: I0214 05:33:31.250861 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:33:31 crc kubenswrapper[4867]: I0214 05:33:31.251572 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.251596 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.252931 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.253067 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.255051 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.255211 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" gracePeriod=600
Feb 14 05:34:01 crc kubenswrapper[4867]: E0214 05:34:01.379643 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.887865 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" exitCode=0
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.887946 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"}
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.888337 4867 scope.go:117] "RemoveContainer" containerID="de23552d651bd266665fca3b2536d2046c3c2309b2c56fb5a66759067df0e4c8"
Feb 14 05:34:01 crc kubenswrapper[4867]: I0214 05:34:01.889374 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:01 crc kubenswrapper[4867]: E0214 05:34:01.889786 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:14 crc kubenswrapper[4867]: I0214 05:34:14.997715 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:14 crc kubenswrapper[4867]: E0214 05:34:14.999539 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:29 crc kubenswrapper[4867]: I0214 05:34:29.997755 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:30 crc kubenswrapper[4867]: E0214 05:34:30.000578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:40 crc kubenswrapper[4867]: I0214 05:34:40.998862 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:41 crc kubenswrapper[4867]: E0214 05:34:40.999421 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:34:55 crc kubenswrapper[4867]: I0214 05:34:55.997748 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:34:55 crc kubenswrapper[4867]: E0214 05:34:55.998716 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:09 crc kubenswrapper[4867]: I0214 05:35:09.997238 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:09 crc kubenswrapper[4867]: E0214 05:35:09.998137 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:22 crc kubenswrapper[4867]: I0214 05:35:22.998375 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:23 crc kubenswrapper[4867]: E0214 05:35:22.999395 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:37 crc kubenswrapper[4867]: I0214 05:35:37.997120 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:37 crc kubenswrapper[4867]: E0214 05:35:37.998149 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:35:49 crc kubenswrapper[4867]: I0214 05:35:49.007182 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:35:49 crc kubenswrapper[4867]: E0214 05:35:49.009803 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:03 crc kubenswrapper[4867]: I0214 05:36:03.003703 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:03 crc kubenswrapper[4867]: E0214 05:36:03.010177 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:16 crc kubenswrapper[4867]: I0214 05:36:16.997446 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:16 crc kubenswrapper[4867]: E0214 05:36:16.999669 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:30 crc kubenswrapper[4867]: I0214 05:36:30.484734 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:30 crc kubenswrapper[4867]: E0214 05:36:30.485874 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:44 crc kubenswrapper[4867]: I0214 05:36:43.998263 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:44 crc kubenswrapper[4867]: E0214 05:36:43.999378 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:36:56 crc kubenswrapper[4867]: I0214 05:36:56.998240 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:36:57 crc kubenswrapper[4867]: E0214 05:36:57.000050 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:37:11 crc kubenswrapper[4867]: I0214 05:37:11.997524 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:37:11 crc kubenswrapper[4867]: E0214 05:37:11.998526 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:37:22 crc kubenswrapper[4867]: I0214 05:37:22.998155 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:37:22 crc kubenswrapper[4867]: E0214 05:37:22.999143 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:37:34 crc kubenswrapper[4867]: I0214 05:37:34.997980 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:37:35 crc kubenswrapper[4867]: E0214 05:37:34.998955 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:37:47 crc kubenswrapper[4867]: I0214 05:37:47.000802 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:37:47 crc kubenswrapper[4867]: E0214 05:37:47.001767 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:01 crc kubenswrapper[4867]: I0214 05:38:01.001238 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:01 crc kubenswrapper[4867]: E0214 05:38:01.002578 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:12 crc kubenswrapper[4867]: I0214 05:38:12.997760 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:12 crc kubenswrapper[4867]: E0214 05:38:12.998933 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:23 crc kubenswrapper[4867]: I0214 05:38:23.998629 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:24 crc kubenswrapper[4867]: E0214 05:38:24.000087 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:34 crc kubenswrapper[4867]: I0214 05:38:34.999084 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:35 crc kubenswrapper[4867]: E0214 05:38:35.000588 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:45 crc kubenswrapper[4867]: I0214 05:38:45.998490 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:46 crc kubenswrapper[4867]: E0214 05:38:45.999487 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:38:56 crc kubenswrapper[4867]: I0214 05:38:56.997650 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:38:56 crc kubenswrapper[4867]: E0214 05:38:56.998909 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd"
Feb 14 05:39:09 crc kubenswrapper[4867]: I0214 05:39:09.998071 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a"
Feb 14 05:39:10 crc kubenswrapper[4867]: I0214 05:39:10.793449 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"}
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.035176 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c4zxt"]
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039453 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039481 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039500 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039526 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039542 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039549 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039562 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-utilities"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039569 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-utilities"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039595 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039604 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039623 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039631 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-content"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039641 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-utilities"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039648 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="extract-utilities"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039658 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039666 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server"
Feb 14 05:40:34 crc kubenswrapper[4867]: E0214
05:40:34.039678 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039685 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039712 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039720 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039738 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039746 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039761 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039769 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039790 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039798 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" containerName="extract-utilities" Feb 14 05:40:34 crc kubenswrapper[4867]: E0214 05:40:34.039809 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.039817 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="extract-content" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040093 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ba042e-98c3-43cc-aa6a-efbb9a63ae61" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040115 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="473b9472-6542-4e27-87e9-17365cd400e1" containerName="collect-profiles" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040134 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040150 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="3532ff4a-374c-407b-b01c-b63267b0f9f9" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040158 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07eb1e9-f4cc-4664-b9f6-80322fe0644a" containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.040165 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8a4292-e933-464b-b36d-918f43ce6f65" 
containerName="registry-server" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.044659 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.106076 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.198730 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.199177 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.199534 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302121 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302249 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.302337 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.304270 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.304642 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " 
pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.324731 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"community-operators-c4zxt\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:34 crc kubenswrapper[4867]: I0214 05:40:34.371600 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:35 crc kubenswrapper[4867]: I0214 05:40:35.932404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:40:35 crc kubenswrapper[4867]: W0214 05:40:35.944260 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1623abf8_a3d2_4598_8f39_f0153f263393.slice/crio-295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532 WatchSource:0}: Error finding container 295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532: Status 404 returned error can't find the container with id 295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532 Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942471 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4" exitCode=0 Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942586 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4"} Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.942803 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532"} Feb 14 05:40:36 crc kubenswrapper[4867]: I0214 05:40:36.946915 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:40:38 crc kubenswrapper[4867]: I0214 05:40:38.968645 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40"} Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.380005 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.384727 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.396499 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564431 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564800 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.564952 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.667782 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.667977 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.668158 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.670429 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.670583 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.688029 4867 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"certified-operators-tlqjg\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:40 crc kubenswrapper[4867]: I0214 05:40:40.735326 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:41 crc kubenswrapper[4867]: I0214 05:40:41.904899 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:40:41 crc kubenswrapper[4867]: W0214 05:40:41.922831 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc89371c3_d8bf_4ac1_8b52_9df945ca0c87.slice/crio-2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524 WatchSource:0}: Error finding container 2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524: Status 404 returned error can't find the container with id 2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524 Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.012537 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40" exitCode=0 Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.012658 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40"} Feb 14 05:40:42 crc kubenswrapper[4867]: I0214 05:40:42.019120 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.032385 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" exitCode=0 Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.032894 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.037923 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerStarted","Data":"cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa"} Feb 14 05:40:43 crc kubenswrapper[4867]: I0214 05:40:43.094650 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c4zxt" podStartSLOduration=4.362208947 podStartE2EDuration="10.093625546s" podCreationTimestamp="2026-02-14 05:40:33 +0000 UTC" firstStartedPulling="2026-02-14 05:40:36.94653862 +0000 UTC m=+5469.027475944" lastFinishedPulling="2026-02-14 05:40:42.677955229 +0000 UTC m=+5474.758892543" observedRunningTime="2026-02-14 05:40:43.08616936 +0000 UTC 
m=+5475.167106674" watchObservedRunningTime="2026-02-14 05:40:43.093625546 +0000 UTC m=+5475.174562860" Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.050821 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.372006 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:44 crc kubenswrapper[4867]: I0214 05:40:44.372670 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:40:45 crc kubenswrapper[4867]: I0214 05:40:45.445370 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:45 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:45 crc kubenswrapper[4867]: > Feb 14 05:40:48 crc kubenswrapper[4867]: I0214 05:40:48.116180 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" exitCode=0 Feb 14 05:40:48 crc kubenswrapper[4867]: I0214 05:40:48.116279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} Feb 14 05:40:49 crc kubenswrapper[4867]: I0214 05:40:49.132840 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerStarted","Data":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} Feb 14 05:40:49 crc kubenswrapper[4867]: I0214 05:40:49.158786 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlqjg" podStartSLOduration=3.507649541 podStartE2EDuration="9.158763982s" podCreationTimestamp="2026-02-14 05:40:40 +0000 UTC" firstStartedPulling="2026-02-14 05:40:43.034877495 +0000 UTC m=+5475.115814809" lastFinishedPulling="2026-02-14 05:40:48.685991936 +0000 UTC m=+5480.766929250" observedRunningTime="2026-02-14 05:40:49.153770481 +0000 UTC m=+5481.234707805" watchObservedRunningTime="2026-02-14 05:40:49.158763982 +0000 UTC m=+5481.239701306" Feb 14 05:40:50 crc kubenswrapper[4867]: I0214 05:40:50.736405 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:50 crc kubenswrapper[4867]: I0214 05:40:50.737085 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:40:51 crc kubenswrapper[4867]: I0214 05:40:51.798615 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:51 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:51 crc 
kubenswrapper[4867]: > Feb 14 05:40:55 crc kubenswrapper[4867]: I0214 05:40:55.460158 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:40:55 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:40:55 crc kubenswrapper[4867]: > Feb 14 05:41:01 crc kubenswrapper[4867]: I0214 05:41:01.785467 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" probeResult="failure" output=< Feb 14 05:41:01 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:41:01 crc kubenswrapper[4867]: > Feb 14 05:41:05 crc kubenswrapper[4867]: I0214 05:41:05.436440 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" probeResult="failure" output=< Feb 14 05:41:05 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:41:05 crc kubenswrapper[4867]: > Feb 14 05:41:10 crc kubenswrapper[4867]: I0214 05:41:10.788164 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:10 crc kubenswrapper[4867]: I0214 05:41:10.912221 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:11 crc kubenswrapper[4867]: I0214 05:41:11.596998 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:12 crc kubenswrapper[4867]: I0214 05:41:12.449798 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlqjg" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" containerID="cri-o://db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" gracePeriod=2 Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.058008 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.179472 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180093 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180367 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") pod \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\" (UID: \"c89371c3-d8bf-4ac1-8b52-9df945ca0c87\") " Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.180707 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities" (OuterVolumeSpecName: "utilities") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.183791 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.194159 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8" (OuterVolumeSpecName: "kube-api-access-kd5n8") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "kube-api-access-kd5n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.247027 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c89371c3-d8bf-4ac1-8b52-9df945ca0c87" (UID: "c89371c3-d8bf-4ac1-8b52-9df945ca0c87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.288015 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.288053 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd5n8\" (UniqueName: \"kubernetes.io/projected/c89371c3-d8bf-4ac1-8b52-9df945ca0c87-kube-api-access-kd5n8\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466681 4867 generic.go:334] "Generic (PLEG): container finished" podID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" exitCode=0 Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466812 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlqjg" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.466845 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.469981 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlqjg" event={"ID":"c89371c3-d8bf-4ac1-8b52-9df945ca0c87","Type":"ContainerDied","Data":"2e3ad0b986cd4719f281090d06d73a636c10ee3dac7a89d41cd182d5abad5524"} Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.470032 4867 scope.go:117] "RemoveContainer" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.530184 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.538157 4867 scope.go:117] "RemoveContainer" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.543467 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlqjg"] Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.563883 4867 scope.go:117] "RemoveContainer" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.647610 4867 scope.go:117] "RemoveContainer" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.649093 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": container with ID starting with db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9 not found: ID does not exist" containerID="db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.649538 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9"} err="failed to get container status 
\"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": rpc error: code = NotFound desc = could not find container \"db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9\": container with ID starting with db0b1076448b0cf8a4ffa6679332907db4bdde3817f84bbb2e6e7c141e2f4ef9 not found: ID does not exist" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.649587 4867 scope.go:117] "RemoveContainer" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.650174 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": container with ID starting with cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067 not found: ID does not exist" containerID="cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650232 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067"} err="failed to get container status \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": rpc error: code = NotFound desc = could not find container \"cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067\": container with ID starting with cb9fea3befa5bf44f897f98aba6d284869bf350bf21600e5d93ec69f31f91067 not found: ID does not exist" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650280 4867 scope.go:117] "RemoveContainer" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: E0214 05:41:13.650785 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": container with ID starting with 3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3 not found: ID does not exist" containerID="3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3" Feb 14 05:41:13 crc kubenswrapper[4867]: I0214 05:41:13.650827 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3"} err="failed to get container status \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": rpc error: code = NotFound desc = could not find container \"3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3\": container with ID starting with 3bc3e643c5429ba5ba3a05589e465e07b9fffdc8f4d33443aa5bd143360e4eb3 not found: ID does not exist" Feb 14 05:41:14 crc kubenswrapper[4867]: I0214 05:41:14.444205 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:14 crc kubenswrapper[4867]: I0214 05:41:14.506549 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.019532 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" path="/var/lib/kubelet/pods/c89371c3-d8bf-4ac1-8b52-9df945ca0c87/volumes" Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.991835 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:15 crc kubenswrapper[4867]: I0214 05:41:15.992229 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c4zxt" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" containerID="cri-o://cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" gracePeriod=2 Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512375 4867 generic.go:334] "Generic (PLEG): container finished" podID="1623abf8-a3d2-4598-8f39-f0153f263393" containerID="cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" exitCode=0 Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512430 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa"} Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512937 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c4zxt" event={"ID":"1623abf8-a3d2-4598-8f39-f0153f263393","Type":"ContainerDied","Data":"295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532"} Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.512962 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295e96f9f07d3095cea3a700b623e45a0a3c5905cbf092c822537e6b819d4532" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.561463 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697035 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697160 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.697262 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") pod \"1623abf8-a3d2-4598-8f39-f0153f263393\" (UID: \"1623abf8-a3d2-4598-8f39-f0153f263393\") " Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.703948 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr" (OuterVolumeSpecName: "kube-api-access-2tgsr") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "kube-api-access-2tgsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.709739 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities" (OuterVolumeSpecName: "utilities") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.748372 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1623abf8-a3d2-4598-8f39-f0153f263393" (UID: "1623abf8-a3d2-4598-8f39-f0153f263393"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801107 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801571 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1623abf8-a3d2-4598-8f39-f0153f263393-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:16 crc kubenswrapper[4867]: I0214 05:41:16.801584 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tgsr\" (UniqueName: \"kubernetes.io/projected/1623abf8-a3d2-4598-8f39-f0153f263393-kube-api-access-2tgsr\") on node \"crc\" DevicePath \"\"" Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.522312 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c4zxt" Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.550814 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:17 crc kubenswrapper[4867]: I0214 05:41:17.562061 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c4zxt"] Feb 14 05:41:19 crc kubenswrapper[4867]: I0214 05:41:19.019219 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" path="/var/lib/kubelet/pods/1623abf8-a3d2-4598-8f39-f0153f263393/volumes" Feb 14 05:41:31 crc kubenswrapper[4867]: I0214 05:41:31.251462 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:41:31 crc kubenswrapper[4867]: I0214 05:41:31.252175 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.734685 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762374 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762406 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762431 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762440 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762527 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762537 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762548 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762555 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762579 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762586 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" 
containerName="extract-content" Feb 14 05:41:52 crc kubenswrapper[4867]: E0214 05:41:52.762670 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.762681 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="extract-utilities" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.763099 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="c89371c3-d8bf-4ac1-8b52-9df945ca0c87" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.763138 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="1623abf8-a3d2-4598-8f39-f0153f263393" containerName="registry-server" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.766879 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.767023 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842139 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842436 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.842595 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945543 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945668 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.945721 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: 
\"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.946307 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.946404 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:52 crc kubenswrapper[4867]: I0214 05:41:52.981107 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"redhat-marketplace-hnp9l\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.102017 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.684525 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977447 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc" exitCode=0 Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977536 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"} Feb 14 05:41:53 crc kubenswrapper[4867]: I0214 05:41:53.977915 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerStarted","Data":"03b63e6e338e97fd57df5fde6fda2a32cf024491536c59c6434b27784e69fdbf"} Feb 14 05:41:56 crc kubenswrapper[4867]: I0214 05:41:56.002214 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684" exitCode=0 Feb 14 05:41:56 crc kubenswrapper[4867]: I0214 05:41:56.002315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"} Feb 14 05:41:57 crc kubenswrapper[4867]: I0214 05:41:57.018426 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerStarted","Data":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"} Feb 14 05:41:57 crc kubenswrapper[4867]: I0214 05:41:57.043116 4867 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-hnp9l" podStartSLOduration=2.645893405 podStartE2EDuration="5.04309463s" podCreationTimestamp="2026-02-14 05:41:52 +0000 UTC" firstStartedPulling="2026-02-14 05:41:53.979548569 +0000 UTC m=+5546.060485883" lastFinishedPulling="2026-02-14 05:41:56.376749794 +0000 UTC m=+5548.457687108" observedRunningTime="2026-02-14 05:41:57.036049235 +0000 UTC m=+5549.116986549" watchObservedRunningTime="2026-02-14 05:41:57.04309463 +0000 UTC m=+5549.124031944" Feb 14 05:42:01 crc kubenswrapper[4867]: I0214 05:42:01.251074 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:42:01 crc kubenswrapper[4867]: I0214 05:42:01.251922 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.103116 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.103773 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:03 crc kubenswrapper[4867]: I0214 05:42:03.172485 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:04 crc kubenswrapper[4867]: I0214 05:42:04.196456 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:04 crc kubenswrapper[4867]: I0214 05:42:04.290987 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.146611 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hnp9l" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server" containerID="cri-o://53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" gracePeriod=2 Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.737999 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.846836 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.846890 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.847167 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") pod \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\" (UID: \"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70\") " Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.850011 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities" (OuterVolumeSpecName: "utilities") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.858847 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv" (OuterVolumeSpecName: "kube-api-access-spttv") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "kube-api-access-spttv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.878096 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" (UID: "cfa44170-d9b0-46a8-a2bb-8c6fa355cf70"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950775 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950833 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:42:06 crc kubenswrapper[4867]: I0214 05:42:06.950847 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spttv\" (UniqueName: \"kubernetes.io/projected/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70-kube-api-access-spttv\") on node \"crc\" DevicePath \"\"" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161299 4867 generic.go:334] "Generic (PLEG): container finished" podID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" exitCode=0 Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161356 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hnp9l" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"} Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161752 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hnp9l" event={"ID":"cfa44170-d9b0-46a8-a2bb-8c6fa355cf70","Type":"ContainerDied","Data":"03b63e6e338e97fd57df5fde6fda2a32cf024491536c59c6434b27784e69fdbf"} Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.161778 4867 scope.go:117] "RemoveContainer" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.189910 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.199277 4867 scope.go:117] "RemoveContainer" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.204571 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hnp9l"] Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.222135 4867 scope.go:117] "RemoveContainer" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.277964 4867 scope.go:117] "RemoveContainer" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.278550 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": container with ID starting with 53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e not found: ID does not exist" containerID="53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.278578 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e"} err="failed to get container status \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": rpc error: code = NotFound desc = could not find container \"53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e\": container with ID starting with 53c4a9ed5aa0cad33a1da047be151c421c199488e1e73c551860333703ddc24e not found: ID does not exist" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.278609 4867 scope.go:117] "RemoveContainer" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684" Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.278978 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": container with ID starting with d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684 not found: ID does not exist" containerID="d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279003 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684"} err="failed to get container status \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": rpc error: code = NotFound desc = could not find container \"d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684\": container with ID starting with d3ee72f5184f22da9e9fbc4f5a4589f4a519e499fcc04c5522f39b89cd0b7684 not found: ID does not exist" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279022 4867 scope.go:117] "RemoveContainer" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc" Feb 14 05:42:07 crc kubenswrapper[4867]: E0214 05:42:07.279400 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": container with ID starting with c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc not found: ID does not exist" containerID="c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc" Feb 14 05:42:07 crc kubenswrapper[4867]: I0214 05:42:07.279434 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc"} err="failed to get container status \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": rpc error: code = NotFound desc = could not find container \"c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc\": container with ID starting with c714a6fab8fc461e14d0f2f11c7a7e01cce0430791be8faf274c90db20b0ebbc not found: ID does not exist" Feb 14 05:42:09 crc kubenswrapper[4867]: I0214 05:42:09.012623 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" path="/var/lib/kubelet/pods/cfa44170-d9b0-46a8-a2bb-8c6fa355cf70/volumes" Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251042 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251813 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.251880 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.252955 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.253056 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" gracePeriod=600 Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.441726 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" exitCode=0 Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.441816 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b"} Feb 14 05:42:31 crc kubenswrapper[4867]: I0214 05:42:31.442207 4867 scope.go:117] "RemoveContainer" containerID="5e73bb84ca12c5e0e2f84b8149632e8db299b151552bafe8248698ab62e5c36a" Feb 14 05:42:32 crc kubenswrapper[4867]: I0214 05:42:32.468623 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.017101 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"] Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018241 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-content" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018256 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-content" Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018303 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018310 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" 
containerName="registry-server" Feb 14 05:42:37 crc kubenswrapper[4867]: E0214 05:42:37.018325 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-utilities" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018332 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="extract-utilities" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.018619 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfa44170-d9b0-46a8-a2bb-8c6fa355cf70" containerName="registry-server" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.028114 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.035837 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"] Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125444 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125772 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.125828 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228475 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228553 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.228688 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.229022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.229725 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.381415 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") pod \"redhat-operators-fwkj9\" (UID: \"159832eb-a78e-4fcd-bbb3-42445194727f\") " pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:37 crc kubenswrapper[4867]: I0214 05:42:37.658364 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.237442 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"] Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.552953 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"} Feb 14 05:42:38 crc kubenswrapper[4867]: I0214 05:42:38.553023 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"70a3c8fb291523d49cc697cb4e2f3f0924e4d7a7ffe89d4339b8129115addb42"} Feb 14 05:42:39 crc kubenswrapper[4867]: I0214 05:42:39.566205 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b" exitCode=0 Feb 14 05:42:39 crc kubenswrapper[4867]: I0214 05:42:39.566285 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"} Feb 14 05:42:41 crc kubenswrapper[4867]: I0214 05:42:41.610315 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"} Feb 14 05:42:55 crc kubenswrapper[4867]: I0214 05:42:55.776846 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141" exitCode=0 Feb 14 05:42:55 crc kubenswrapper[4867]: I0214 05:42:55.776968 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"} Feb 14 05:42:56 crc kubenswrapper[4867]: I0214 05:42:56.792430 4867 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerStarted","Data":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"} Feb 14 05:42:56 crc kubenswrapper[4867]: I0214 05:42:56.817216 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fwkj9" podStartSLOduration=4.173064365 podStartE2EDuration="20.817193924s" podCreationTimestamp="2026-02-14 05:42:36 +0000 UTC" firstStartedPulling="2026-02-14 05:42:39.570052021 +0000 UTC m=+5591.650989335" lastFinishedPulling="2026-02-14 05:42:56.21418158 +0000 UTC m=+5608.295118894" observedRunningTime="2026-02-14 05:42:56.811041353 +0000 UTC m=+5608.891978667" watchObservedRunningTime="2026-02-14 05:42:56.817193924 +0000 UTC m=+5608.898131238" Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.658966 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.659718 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:58 crc kubenswrapper[4867]: I0214 05:42:58.715036 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:42:58 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:42:58 crc kubenswrapper[4867]: > Feb 14 05:43:08 crc kubenswrapper[4867]: I0214 05:43:08.708519 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:08 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:08 crc kubenswrapper[4867]: > Feb 14 05:43:18 crc kubenswrapper[4867]: I0214 05:43:18.714356 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:18 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:18 crc kubenswrapper[4867]: > Feb 14 05:43:28 crc kubenswrapper[4867]: I0214 05:43:28.709940 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:28 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:28 crc kubenswrapper[4867]: > Feb 14 05:43:38 crc kubenswrapper[4867]: I0214 05:43:38.719138 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:38 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:38 crc kubenswrapper[4867]: > Feb 14 05:43:48 crc kubenswrapper[4867]: I0214 05:43:48.316134 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:43:48 crc kubenswrapper[4867]: 
Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.658966 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:57 crc kubenswrapper[4867]: I0214 05:42:57.659718 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:42:58 crc kubenswrapper[4867]: I0214 05:42:58.715036 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:42:58 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:42:58 crc kubenswrapper[4867]: > Feb 14 05:43:08 crc kubenswrapper[4867]: I0214 05:43:08.708519 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:08 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:08 crc kubenswrapper[4867]: > Feb 14 05:43:18 crc kubenswrapper[4867]: I0214 05:43:18.714356 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:18 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:18 crc kubenswrapper[4867]: > Feb 14 05:43:28 crc kubenswrapper[4867]: I0214 05:43:28.709940 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:28 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:28 crc kubenswrapper[4867]: > Feb 14 05:43:38 crc kubenswrapper[4867]: I0214 05:43:38.719138 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fwkj9" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" probeResult="failure" output=< Feb 14 05:43:38 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:43:38 crc kubenswrapper[4867]: >
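
Each of the five startup-probe failures above has the same shape: grpc_health_probe-style output reporting that nothing answered on the registry-server's gRPC port :50051 within the 1s budget, while the just-unpacked catalog was still loading. A hedged way to reproduce the check by hand from inside the pod's network namespace (a plain TCP dial only; the real probe may issue a full gRPC health RPC, which is an assumption here):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Same one-second budget the probe output quotes ("within 1s").
    	conn, err := net.DialTimeout("tcp", "localhost:50051", 1*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Println("registry-server is accepting connections on :50051")
    }

Once the registry process finishes loading the catalog, the dial succeeds, which is what the status="started" transition in the next entries reflects.
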
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.354808 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.355011 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gthwm\" (UniqueName: \"kubernetes.io/projected/159832eb-a78e-4fcd-bbb3-42445194727f-kube-api-access-gthwm\") on node \"crc\" DevicePath \"\"" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.404850 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "159832eb-a78e-4fcd-bbb3-42445194727f" (UID: "159832eb-a78e-4fcd-bbb3-42445194727f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.457199 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/159832eb-a78e-4fcd-bbb3-42445194727f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460684 4867 generic.go:334] "Generic (PLEG): container finished" podID="159832eb-a78e-4fcd-bbb3-42445194727f" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" exitCode=0 Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460726 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"} Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460741 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fwkj9" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460767 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fwkj9" event={"ID":"159832eb-a78e-4fcd-bbb3-42445194727f","Type":"ContainerDied","Data":"70a3c8fb291523d49cc697cb4e2f3f0924e4d7a7ffe89d4339b8129115addb42"} Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.460792 4867 scope.go:117] "RemoveContainer" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.504352 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"] Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.508838 4867 scope.go:117] "RemoveContainer" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.515328 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fwkj9"] Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.536266 4867 scope.go:117] "RemoveContainer" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.605899 4867 scope.go:117] "RemoveContainer" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.606749 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": container with ID starting with 3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641 not found: ID does not exist" containerID="3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.606800 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641"} err="failed to get container status \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": rpc error: code = NotFound desc = could not find container \"3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641\": container with ID starting with 3ca6cb09cd430f5a3defeae78e1e443d4c1ec2d8364fedad7b075708227be641 not found: ID does not exist" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.606833 4867 scope.go:117] "RemoveContainer" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141" Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.607288 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": container with ID starting with f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141 not found: ID does not exist" containerID="f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607329 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141"} err="failed to get container status \"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": rpc error: code = NotFound desc = could not find container 
\"f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141\": container with ID starting with f340285247d57ce26dc5d1b1f4bfd2fffd160b22fc2966043a6cbc8ce2a85141 not found: ID does not exist" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607361 4867 scope.go:117] "RemoveContainer" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b" Feb 14 05:43:50 crc kubenswrapper[4867]: E0214 05:43:50.607699 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": container with ID starting with a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b not found: ID does not exist" containerID="a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b" Feb 14 05:43:50 crc kubenswrapper[4867]: I0214 05:43:50.607724 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b"} err="failed to get container status \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": rpc error: code = NotFound desc = could not find container \"a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b\": container with ID starting with a4f892867677ed3b7f68271fe9e6b97b68c94c9eb4b88b4e24c805889f39d99b not found: ID does not exist" Feb 14 05:43:51 crc kubenswrapper[4867]: I0214 05:43:51.020163 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" path="/var/lib/kubelet/pods/159832eb-a78e-4fcd-bbb3-42445194727f/volumes" Feb 14 05:44:31 crc kubenswrapper[4867]: I0214 05:44:31.251653 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:44:31 crc kubenswrapper[4867]: I0214 05:44:31.252298 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.510396 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"] Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512676 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-content" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512711 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-content" Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512744 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-utilities" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512754 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="extract-utilities" Feb 14 05:45:00 crc kubenswrapper[4867]: E0214 05:45:00.512782 4867 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.512790 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.513203 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="159832eb-a78e-4fcd-bbb3-42445194727f" containerName="registry-server" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.514739 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.534066 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"] Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.560456 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.560550 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.676753 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.676892 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.677125 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779430 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779638 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.779722 4867 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.781219 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.786771 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.798396 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"collect-profiles-29517465-7qzlj\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:00 crc kubenswrapper[4867]: I0214 05:45:00.851812 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.251038 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.251431 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.588146 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj"] Feb 14 05:45:01 crc kubenswrapper[4867]: I0214 05:45:01.612909 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerStarted","Data":"04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7"} Feb 14 05:45:02 crc kubenswrapper[4867]: I0214 05:45:02.634805 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerStarted","Data":"b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9"} Feb 14 05:45:02 crc kubenswrapper[4867]: I0214 05:45:02.654873 4867 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" podStartSLOduration=2.654842918 podStartE2EDuration="2.654842918s" podCreationTimestamp="2026-02-14 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:45:02.652773463 +0000 UTC m=+5734.733710787" watchObservedRunningTime="2026-02-14 05:45:02.654842918 +0000 UTC m=+5734.735780252" Feb 14 05:45:04 crc kubenswrapper[4867]: I0214 05:45:04.658824 4867 generic.go:334] "Generic (PLEG): container finished" podID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerID="b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9" exitCode=0 Feb 14 05:45:04 crc kubenswrapper[4867]: I0214 05:45:04.658882 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerDied","Data":"b26df2e0ba8ccf7ec64150d93bdd34ff2089160925b8351fda1257f3e4a295e9"} Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.201131 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238780 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238839 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.238959 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") pod \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\" (UID: \"ffaf4c01-f071-4d1a-9bb1-3711e9938e44\") " Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.240295 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.248635 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.249211 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch" (OuterVolumeSpecName: "kube-api-access-pxzch") pod "ffaf4c01-f071-4d1a-9bb1-3711e9938e44" (UID: "ffaf4c01-f071-4d1a-9bb1-3711e9938e44"). InnerVolumeSpecName "kube-api-access-pxzch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341526 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341555 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzch\" (UniqueName: \"kubernetes.io/projected/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-kube-api-access-pxzch\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.341564 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffaf4c01-f071-4d1a-9bb1-3711e9938e44-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.691668 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" event={"ID":"ffaf4c01-f071-4d1a-9bb1-3711e9938e44","Type":"ContainerDied","Data":"04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7"} Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.692053 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04989d46bc85b246ba9abff5063cc062395b2c5e897fafd11e59b6f35637d5c7" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.692162 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517465-7qzlj" Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.813245 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:45:06 crc kubenswrapper[4867]: I0214 05:45:06.823369 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517420-spkbd"] Feb 14 05:45:07 crc kubenswrapper[4867]: I0214 05:45:07.036093 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3d9933-ea61-47f2-a857-edd1af2baf67" path="/var/lib/kubelet/pods/9f3d9933-ea61-47f2-a857-edd1af2baf67/volumes" Feb 14 05:45:18 crc kubenswrapper[4867]: I0214 05:45:18.747979 4867 scope.go:117] "RemoveContainer" containerID="7e47076001317bcb38834fe5f61417f02ae8109c8832987a242d29c2b0b144fa" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251028 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251847 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253118 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253188 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" gracePeriod=600 Feb 14 05:45:31 crc kubenswrapper[4867]: E0214 05:45:31.381583 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} Feb 14 05:45:31 crc 
Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.251919 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253118 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.253188 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" gracePeriod=600 Feb 14 05:45:31 crc kubenswrapper[4867]: E0214 05:45:31.381583 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983079 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"} Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983168 4867 scope.go:117] "RemoveContainer" containerID="f5d63b1271ea439ba7c2f7514281f50c704e327b66fe9d213dc7e443134b610b" Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.983019 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" exitCode=0 Feb 14 05:45:31 crc kubenswrapper[4867]: I0214 05:45:31.984061 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:45:31 crc kubenswrapper[4867]: E0214 05:45:31.984486 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:45:46 crc kubenswrapper[4867]: I0214 05:45:46.998098 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:45:46 crc kubenswrapper[4867]: E0214 05:45:46.999927 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:00 crc kubenswrapper[4867]: I0214 05:46:00.997885 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:01 crc kubenswrapper[4867]: E0214 05:46:00.998823 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:12 crc kubenswrapper[4867]: I0214 05:46:12.998367 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:13 crc kubenswrapper[4867]: E0214 05:46:12.999622 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:23 crc kubenswrapper[4867]: I0214 05:46:23.997759 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37"
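
From this point the kubelet declines to restart machine-config-daemon immediately: the "RemoveContainer" / "Error syncing pod" pair above repeats on each sync attempt while crash-loop back-off is in effect, and "back-off 5m0s" quotes the back-off cap rather than a fixed one-off wait. Upstream kubelet doubles the per-crash delay up to that cap; the 10s starting value below is the kubelet default and an assumption here, since only the 5m0s cap appears in this log. A sketch of the resulting schedule:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 10 * time.Second   // kubelet's default initial crash-loop delay (assumed)
    	maxDelay := 5 * time.Minute // the "back-off 5m0s" cap quoted above
    	for restart := 1; restart <= 8; restart++ {
    		fmt.Printf("restart %d: wait %s\n", restart, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

The schedule reaches the 5m0s cap by the sixth restart, which is consistent with the daemon staying in CrashLoopBackOff for the remainder of this excerpt.
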
Feb 14 05:46:23 crc kubenswrapper[4867]: E0214 05:46:23.999657 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:36 crc kubenswrapper[4867]: I0214 05:46:36.998685 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:37 crc kubenswrapper[4867]: E0214 05:46:37.000039 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:46:48 crc kubenswrapper[4867]: I0214 05:46:47.999602 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:46:48 crc kubenswrapper[4867]: E0214 05:46:48.003455 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:01 crc kubenswrapper[4867]: I0214 05:47:01.998783 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:02 crc kubenswrapper[4867]: E0214 05:47:01.999892 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:14 crc kubenswrapper[4867]: I0214 05:47:14.997867 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:14 crc kubenswrapper[4867]: E0214 05:47:14.998817 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:18 crc kubenswrapper[4867]: I0214 05:47:18.897115 4867 scope.go:117] "RemoveContainer" containerID="cbe326a8e5634578b70f7f6afe4763f8fc03fbfab3802a9533507439c097bf40" Feb 14 05:47:18 crc kubenswrapper[4867]: I0214 05:47:18.922732 4867 scope.go:117] "RemoveContainer" containerID="196ca742dcc703f46deb1d50ebb9f9afbcb2cb52b7aa66003ca89e4afaf13dc4" Feb 14 05:47:19 crc kubenswrapper[4867]: I0214 05:47:19.001976 4867 scope.go:117] "RemoveContainer" containerID="cc44a1a3222d6deb16349071be26b927d02318057d20a59ca7cbee80422066fa" Feb 14 05:47:29 crc kubenswrapper[4867]: I0214 05:47:29.997338 4867 scope.go:117] "RemoveContainer"
containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:29 crc kubenswrapper[4867]: E0214 05:47:29.998850 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:44 crc kubenswrapper[4867]: I0214 05:47:44.997891 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:44 crc kubenswrapper[4867]: E0214 05:47:44.998789 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:47:58 crc kubenswrapper[4867]: I0214 05:47:57.999566 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:47:58 crc kubenswrapper[4867]: E0214 05:47:58.001344 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:13 crc kubenswrapper[4867]: I0214 05:48:12.998911 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:13 crc kubenswrapper[4867]: E0214 05:48:12.999986 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:23 crc kubenswrapper[4867]: I0214 05:48:23.998346 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:24 crc kubenswrapper[4867]: E0214 05:48:23.999287 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:35 crc kubenswrapper[4867]: I0214 05:48:35.997293 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:35 crc kubenswrapper[4867]: E0214 05:48:35.998056 4867 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:48:49 crc kubenswrapper[4867]: I0214 05:48:49.997862 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:48:49 crc kubenswrapper[4867]: E0214 05:48:49.998705 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:03 crc kubenswrapper[4867]: I0214 05:49:02.998284 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:03 crc kubenswrapper[4867]: E0214 05:49:03.001459 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:16 crc kubenswrapper[4867]: I0214 05:49:16.998850 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:17 crc kubenswrapper[4867]: E0214 05:49:17.000125 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:29 crc kubenswrapper[4867]: I0214 05:49:29.007908 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:29 crc kubenswrapper[4867]: E0214 05:49:29.010655 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:42 crc kubenswrapper[4867]: I0214 05:49:42.997389 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:42 crc kubenswrapper[4867]: E0214 05:49:42.998390 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:50 crc kubenswrapper[4867]: I0214 05:49:50.919362 4867 generic.go:334] "Generic (PLEG): container finished" podID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerID="b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c" exitCode=1 Feb 14 05:49:50 crc kubenswrapper[4867]: I0214 05:49:50.919457 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerDied","Data":"b1742179cf0672940dcd64c514227d7fd46e83cfc6502a0b57ebf7e4bf13678c"} Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.410624 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475341 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475415 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475464 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475534 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475682 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475772 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475818 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475872 4867 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.475913 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") pod \"a161c594-8af3-458f-911a-bbf51e7bfcdd\" (UID: \"a161c594-8af3-458f-911a-bbf51e7bfcdd\") " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.477653 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data" (OuterVolumeSpecName: "config-data") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.478287 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.486616 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.489843 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.491052 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z" (OuterVolumeSpecName: "kube-api-access-vh78z") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "kube-api-access-vh78z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.535764 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.550244 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.555497 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.564773 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a161c594-8af3-458f-911a-bbf51e7bfcdd" (UID: "a161c594-8af3-458f-911a-bbf51e7bfcdd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583915 4867 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583958 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583968 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh78z\" (UniqueName: \"kubernetes.io/projected/a161c594-8af3-458f-911a-bbf51e7bfcdd-kube-api-access-vh78z\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583979 4867 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a161c594-8af3-458f-911a-bbf51e7bfcdd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.583991 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584003 4867 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a161c594-8af3-458f-911a-bbf51e7bfcdd-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584013 4867 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.584020 4867 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a161c594-8af3-458f-911a-bbf51e7bfcdd-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 14 
05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.585908 4867 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.619482 4867 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.687794 4867 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.947827 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a161c594-8af3-458f-911a-bbf51e7bfcdd","Type":"ContainerDied","Data":"69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a"} Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.948170 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a1559021e3c0afa3311c13a382b071b919ecabc5729024c716838afe1c709a" Feb 14 05:49:52 crc kubenswrapper[4867]: I0214 05:49:52.947909 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 14 05:49:56 crc kubenswrapper[4867]: I0214 05:49:56.997426 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:49:56 crc kubenswrapper[4867]: E0214 05:49:56.998047 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.230123 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 05:49:59 crc kubenswrapper[4867]: E0214 05:49:59.231228 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231242 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles" Feb 14 05:49:59 crc kubenswrapper[4867]: E0214 05:49:59.231286 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231292 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231532 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaf4c01-f071-4d1a-9bb1-3711e9938e44" containerName="collect-profiles" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.231549 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="a161c594-8af3-458f-911a-bbf51e7bfcdd" containerName="tempest-tests-tempest-tests-runner" Feb 14 
05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.232454 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.234704 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-wxg74" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.256399 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.344663 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.344758 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.446590 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.446714 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.448605 4867 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.476349 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m64q\" (UniqueName: \"kubernetes.io/projected/be58ab35-1c46-426e-87a1-9010a643ead5-kube-api-access-2m64q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.508982 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"be58ab35-1c46-426e-87a1-9010a643ead5\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:49:59 crc kubenswrapper[4867]: I0214 05:49:59.572301 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 14 05:50:00 crc kubenswrapper[4867]: I0214 05:50:00.054292 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 14 05:50:00 crc kubenswrapper[4867]: I0214 05:50:00.064300 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 05:50:01 crc kubenswrapper[4867]: I0214 05:50:01.054613 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be58ab35-1c46-426e-87a1-9010a643ead5","Type":"ContainerStarted","Data":"f17eb7fe48ca9d0696a2919b81b7780674d72a55c18fc53e5110e168118f3e53"} Feb 14 05:50:03 crc kubenswrapper[4867]: I0214 05:50:03.075378 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"be58ab35-1c46-426e-87a1-9010a643ead5","Type":"ContainerStarted","Data":"e89f42246f95386c41a8b48ca3284511cdaac889dfff3d346f0eeb99b832072d"} Feb 14 05:50:03 crc kubenswrapper[4867]: I0214 05:50:03.097847 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.7815386869999998 podStartE2EDuration="4.097829699s" podCreationTimestamp="2026-02-14 05:49:59 +0000 UTC" firstStartedPulling="2026-02-14 05:50:00.0640454 +0000 UTC m=+6032.144982714" lastFinishedPulling="2026-02-14 05:50:02.380336412 +0000 UTC m=+6034.461273726" observedRunningTime="2026-02-14 05:50:03.09097153 +0000 UTC m=+6035.171908844" watchObservedRunningTime="2026-02-14 05:50:03.097829699 +0000 UTC m=+6035.178767003" Feb 14 05:50:07 crc kubenswrapper[4867]: I0214 05:50:07.997333 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:50:07 crc kubenswrapper[4867]: E0214 05:50:07.999203 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:50:19 crc kubenswrapper[4867]: I0214 05:50:19.009840 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:50:19 crc kubenswrapper[4867]: E0214 05:50:19.010843 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:50:29 crc kubenswrapper[4867]: I0214 05:50:29.997723 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:50:29 crc kubenswrapper[4867]: E0214 05:50:29.998445 4867 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:50:40 crc kubenswrapper[4867]: I0214 05:50:40.997998 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:50:41 crc kubenswrapper[4867]: I0214 05:50:41.502607 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"} Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.528851 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"] Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.532160 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.534269 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rtzc7"/"kube-root-ca.crt" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.535682 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rtzc7"/"openshift-service-ca.crt" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.560276 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"] Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.688286 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.688338 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.792567 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.792845 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.793113 4867 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.812660 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"must-gather-wmzns\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:52 crc kubenswrapper[4867]: I0214 05:50:52.851682 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:50:53 crc kubenswrapper[4867]: I0214 05:50:53.429669 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"] Feb 14 05:50:53 crc kubenswrapper[4867]: I0214 05:50:53.624647 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"2d6a5a00012c52a2aac1e8dffdc748b022caf87a8674b148896c8bda016c8acb"} Feb 14 05:51:01 crc kubenswrapper[4867]: I0214 05:51:01.723697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"} Feb 14 05:51:02 crc kubenswrapper[4867]: I0214 05:51:02.744826 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerStarted","Data":"8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76"} Feb 14 05:51:02 crc kubenswrapper[4867]: I0214 05:51:02.770831 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/must-gather-wmzns" podStartSLOduration=2.887364411 podStartE2EDuration="10.770811301s" podCreationTimestamp="2026-02-14 05:50:52 +0000 UTC" firstStartedPulling="2026-02-14 05:50:53.430273596 +0000 UTC m=+6085.511210920" lastFinishedPulling="2026-02-14 05:51:01.313720486 +0000 UTC m=+6093.394657810" observedRunningTime="2026-02-14 05:51:02.763993962 +0000 UTC m=+6094.844931306" watchObservedRunningTime="2026-02-14 05:51:02.770811301 +0000 UTC m=+6094.851748605" Feb 14 05:51:07 crc kubenswrapper[4867]: E0214 05:51:07.419544 4867 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.113:59912->38.102.83.113:33373: read tcp 38.102.83.113:59912->38.102.83.113:33373: read: connection reset by peer Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.302553 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"] Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.305161 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.311798 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.414797 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.415238 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.517793 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.517953 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.518820 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.538295 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"crc-debug-tz25z\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.623877 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:51:08 crc kubenswrapper[4867]: I0214 05:51:08.825567 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerStarted","Data":"624245ddd3f542dcc22ae9c7894ed7fd3efca5ec2c05b8bd8c8b8eec3e915a96"} Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.600917 4867 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.605690 4867 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j95rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-tz25z_openshift-must-gather-rtzc7(b8b6ff93-1581-48eb-b74d-f7c97cdb1918): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 14 05:51:24 crc kubenswrapper[4867]: E0214 05:51:24.607090 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" Feb 14 05:51:25 crc 
kubenswrapper[4867]: E0214 05:51:25.074312 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.949464 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.953196 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.963217 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.990957 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.991115 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:39 crc kubenswrapper[4867]: I0214 05:51:39.991314 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.093783 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.094316 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.094876 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.095126 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.096471 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.120394 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"certified-operators-gxd4x\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.274408 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.296695 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerStarted","Data":"fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f"} Feb 14 05:51:40 crc kubenswrapper[4867]: I0214 05:51:40.322185 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" podStartSLOduration=1.569922638 podStartE2EDuration="32.322164308s" podCreationTimestamp="2026-02-14 05:51:08 +0000 UTC" firstStartedPulling="2026-02-14 05:51:08.684943755 +0000 UTC m=+6100.765881069" lastFinishedPulling="2026-02-14 05:51:39.437185415 +0000 UTC m=+6131.518122739" observedRunningTime="2026-02-14 05:51:40.314841076 +0000 UTC m=+6132.395778400" watchObservedRunningTime="2026-02-14 05:51:40.322164308 +0000 UTC m=+6132.403101622" Feb 14 05:51:41 crc kubenswrapper[4867]: I0214 05:51:41.713381 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:51:42 crc kubenswrapper[4867]: I0214 05:51:42.316099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"9ee331d8f9be369631f10654c158e87afe7a9d548a81fbe230376595ebd85ecc"} Feb 14 05:51:43 crc kubenswrapper[4867]: I0214 05:51:43.325305 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" exitCode=0 Feb 14 05:51:43 crc kubenswrapper[4867]: I0214 05:51:43.325349 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6"} Feb 14 05:51:44 crc kubenswrapper[4867]: I0214 05:51:44.344316 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" 
event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} Feb 14 05:51:47 crc kubenswrapper[4867]: I0214 05:51:47.378018 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" exitCode=0 Feb 14 05:51:47 crc kubenswrapper[4867]: I0214 05:51:47.378083 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} Feb 14 05:51:50 crc kubenswrapper[4867]: I0214 05:51:50.759839 4867 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:51:50 crc kubenswrapper[4867]: I0214 05:51:50.762231 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="b27199a8-11ac-4e59-90b8-b42387dd6dd2" containerName="galera" probeResult="failure" output="command timed out" Feb 14 05:51:52 crc kubenswrapper[4867]: I0214 05:51:52.913315 4867 trace.go:236] Trace[1536607917]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-bvb8v" (14-Feb-2026 05:51:51.302) (total time: 1610ms): Feb 14 05:51:52 crc kubenswrapper[4867]: Trace[1536607917]: [1.610399058s] [1.610399058s] END Feb 14 05:51:56 crc kubenswrapper[4867]: I0214 05:51:56.485522 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerStarted","Data":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} Feb 14 05:51:56 crc kubenswrapper[4867]: I0214 05:51:56.504756 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gxd4x" podStartSLOduration=5.661832485 podStartE2EDuration="17.504736756s" podCreationTimestamp="2026-02-14 05:51:39 +0000 UTC" firstStartedPulling="2026-02-14 05:51:43.327759818 +0000 UTC m=+6135.408697122" lastFinishedPulling="2026-02-14 05:51:55.170664079 +0000 UTC m=+6147.251601393" observedRunningTime="2026-02-14 05:51:56.50181516 +0000 UTC m=+6148.582752474" watchObservedRunningTime="2026-02-14 05:51:56.504736756 +0000 UTC m=+6148.585674080" Feb 14 05:52:00 crc kubenswrapper[4867]: I0214 05:52:00.274584 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:00 crc kubenswrapper[4867]: I0214 05:52:00.275982 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:01 crc kubenswrapper[4867]: I0214 05:52:01.326915 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" probeResult="failure" output=< Feb 14 05:52:01 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:52:01 crc kubenswrapper[4867]: > Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.622765 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 
05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.638980 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.673278 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777012 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777281 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.777377 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.879591 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.880219 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.880435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.881022 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.881088 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " 
pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:09 crc kubenswrapper[4867]: I0214 05:52:09.899753 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"community-operators-9mmhn\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:10 crc kubenswrapper[4867]: I0214 05:52:10.015370 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:10 crc kubenswrapper[4867]: I0214 05:52:10.935341 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:10 crc kubenswrapper[4867]: W0214 05:52:10.944036 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03648482_256b_4fd0_94f3_f5dd889f5d49.slice/crio-4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c WatchSource:0}: Error finding container 4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c: Status 404 returned error can't find the container with id 4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.350094 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" probeResult="failure" output=< Feb 14 05:52:11 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:52:11 crc kubenswrapper[4867]: > Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649868 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d" exitCode=0 Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649913 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"} Feb 14 05:52:11 crc kubenswrapper[4867]: I0214 05:52:11.649938 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c"} Feb 14 05:52:12 crc kubenswrapper[4867]: I0214 05:52:12.662127 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} Feb 14 05:52:16 crc kubenswrapper[4867]: I0214 05:52:16.712026 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1" exitCode=0 Feb 14 05:52:16 crc kubenswrapper[4867]: I0214 05:52:16.712104 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" 
event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} Feb 14 05:52:17 crc kubenswrapper[4867]: I0214 05:52:17.725520 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerStarted","Data":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"} Feb 14 05:52:17 crc kubenswrapper[4867]: I0214 05:52:17.762676 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mmhn" podStartSLOduration=3.300983856 podStartE2EDuration="8.762649406s" podCreationTimestamp="2026-02-14 05:52:09 +0000 UTC" firstStartedPulling="2026-02-14 05:52:11.652469649 +0000 UTC m=+6163.733406963" lastFinishedPulling="2026-02-14 05:52:17.114135199 +0000 UTC m=+6169.195072513" observedRunningTime="2026-02-14 05:52:17.750725463 +0000 UTC m=+6169.831662787" watchObservedRunningTime="2026-02-14 05:52:17.762649406 +0000 UTC m=+6169.843586720" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.016399 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.017047 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.074704 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.347501 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:20 crc kubenswrapper[4867]: I0214 05:52:20.400725 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:21 crc kubenswrapper[4867]: I0214 05:52:21.323312 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:21 crc kubenswrapper[4867]: I0214 05:52:21.813153 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gxd4x" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" containerID="cri-o://f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" gracePeriod=2 Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.651929 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734721 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734901 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.734949 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") pod \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\" (UID: \"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925\") " Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.736895 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities" (OuterVolumeSpecName: "utilities") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.748001 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv" (OuterVolumeSpecName: "kube-api-access-7l5cv") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "kube-api-access-7l5cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.804381 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" (UID: "90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828107 4867 generic.go:334] "Generic (PLEG): container finished" podID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" exitCode=0 Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828161 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828214 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gxd4x" event={"ID":"90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925","Type":"ContainerDied","Data":"9ee331d8f9be369631f10654c158e87afe7a9d548a81fbe230376595ebd85ecc"} Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828234 4867 scope.go:117] "RemoveContainer" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.828718 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gxd4x" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838139 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838167 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l5cv\" (UniqueName: \"kubernetes.io/projected/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-kube-api-access-7l5cv\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.838176 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.862424 4867 scope.go:117] "RemoveContainer" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.875847 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.892871 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gxd4x"] Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.893209 4867 scope.go:117] "RemoveContainer" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.949836 4867 scope.go:117] "RemoveContainer" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc kubenswrapper[4867]: E0214 05:52:22.951020 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": container with ID starting with f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623 not found: ID does not exist" containerID="f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.951176 
4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623"} err="failed to get container status \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": rpc error: code = NotFound desc = could not find container \"f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623\": container with ID starting with f3a664068966be5f0271feb5aa6cd4ab27234fbada6908923e0783a689fac623 not found: ID does not exist" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.951293 4867 scope.go:117] "RemoveContainer" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: E0214 05:52:22.952063 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": container with ID starting with 217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f not found: ID does not exist" containerID="217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952113 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f"} err="failed to get container status \"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": rpc error: code = NotFound desc = could not find container \"217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f\": container with ID starting with 217bb34d1c9ef98f68a83eeb0567200efcaef13a371554b797dc554328ba880f not found: ID does not exist" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952151 4867 scope.go:117] "RemoveContainer" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: E0214 05:52:22.952634 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": container with ID starting with 4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6 not found: ID does not exist" containerID="4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6" Feb 14 05:52:22 crc kubenswrapper[4867]: I0214 05:52:22.952706 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6"} err="failed to get container status \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": rpc error: code = NotFound desc = could not find container \"4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6\": container with ID starting with 4f87e7e2ede33a2b7fa8751c3558057ce168debd3fad80bbc97dfee71d9403f6 not found: ID does not exist" Feb 14 05:52:23 crc kubenswrapper[4867]: I0214 05:52:23.018180 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" path="/var/lib/kubelet/pods/90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925/volumes" Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.067425 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.123429 4867 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:30 crc kubenswrapper[4867]: I0214 05:52:30.923833 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9mmhn" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server" containerID="cri-o://2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" gracePeriod=2 Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.592729 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668494 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668657 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.668807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") pod \"03648482-256b-4fd0-94f3-f5dd889f5d49\" (UID: \"03648482-256b-4fd0-94f3-f5dd889f5d49\") " Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.669268 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities" (OuterVolumeSpecName: "utilities") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.669812 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.674189 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm" (OuterVolumeSpecName: "kube-api-access-958nm") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "kube-api-access-958nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.729972 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03648482-256b-4fd0-94f3-f5dd889f5d49" (UID: "03648482-256b-4fd0-94f3-f5dd889f5d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.771963 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958nm\" (UniqueName: \"kubernetes.io/projected/03648482-256b-4fd0-94f3-f5dd889f5d49-kube-api-access-958nm\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.772018 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03648482-256b-4fd0-94f3-f5dd889f5d49-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939453 4867 generic.go:334] "Generic (PLEG): container finished" podID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" exitCode=0 Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939530 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"} Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939570 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mmhn" event={"ID":"03648482-256b-4fd0-94f3-f5dd889f5d49","Type":"ContainerDied","Data":"4af7c7cb57e6c823cd8ca405b9ff517789ac1bc4c72ea321b165f7d9962baf0c"} Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939599 4867 scope.go:117] "RemoveContainer" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.939915 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mmhn" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.978627 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.979406 4867 scope.go:117] "RemoveContainer" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1" Feb 14 05:52:31 crc kubenswrapper[4867]: I0214 05:52:31.989962 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mmhn"] Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.018444 4867 scope.go:117] "RemoveContainer" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.062918 4867 scope.go:117] "RemoveContainer" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.063348 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": container with ID starting with 2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d not found: ID does not exist" containerID="2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063391 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d"} err="failed to get container status \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": rpc error: code = NotFound desc = could not find container \"2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d\": container with ID starting with 2d31332057a234b106a0d8f4134ffef0fa3c66cfd6f0489e8bd73f6fbaee3b8d not found: ID does not exist" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063417 4867 scope.go:117] "RemoveContainer" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1" Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.063851 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": container with ID starting with 8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1 not found: ID does not exist" containerID="8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063880 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1"} err="failed to get container status \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": rpc error: code = NotFound desc = could not find container \"8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1\": container with ID starting with 8a71562eca20736776b1b289eef72b40cdd0bac2d1c9a667381ad0a06ca552e1 not found: ID does not exist" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.063900 4867 scope.go:117] "RemoveContainer" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d" Feb 14 05:52:32 crc kubenswrapper[4867]: E0214 05:52:32.064186 4867 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": container with ID starting with 3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d not found: ID does not exist" containerID="3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d" Feb 14 05:52:32 crc kubenswrapper[4867]: I0214 05:52:32.064221 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d"} err="failed to get container status \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": rpc error: code = NotFound desc = could not find container \"3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d\": container with ID starting with 3acf98add1207742a1f3d6ba0589024876be4929f2133bb35c96811ccecaba3d not found: ID does not exist" Feb 14 05:52:33 crc kubenswrapper[4867]: I0214 05:52:33.010162 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" path="/var/lib/kubelet/pods/03648482-256b-4fd0-94f3-f5dd889f5d49/volumes" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.429178 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.429992 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-utilities" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430005 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-utilities" Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430027 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-utilities" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430034 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-utilities" Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430059 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430080 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430095 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-content" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430101 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="extract-content" Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430109 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430115 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: E0214 05:52:34.430147 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" 
containerName="extract-content" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430156 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="extract-content" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430438 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="03648482-256b-4fd0-94f3-f5dd889f5d49" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.430455 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fd1cb8-cf3a-4f2d-ae19-49cf43cd4925" containerName="registry-server" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.434674 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.441034 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535183 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535547 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.535848 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.636838 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637013 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637097 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637434 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.637629 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.661677 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"redhat-marketplace-4l6q7\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.754091 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.984617 4867 generic.go:334] "Generic (PLEG): container finished" podID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerID="fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f" exitCode=0 Feb 14 05:52:34 crc kubenswrapper[4867]: I0214 05:52:34.984657 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" event={"ID":"b8b6ff93-1581-48eb-b74d-f7c97cdb1918","Type":"ContainerDied","Data":"fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f"} Feb 14 05:52:35 crc kubenswrapper[4867]: I0214 05:52:35.261335 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.001597 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" exitCode=0 Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.001710 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f"} Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.002078 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b"} Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.147270 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.185134 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"] Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.196130 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-tz25z"] Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276229 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") pod \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276819 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") pod \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\" (UID: \"b8b6ff93-1581-48eb-b74d-f7c97cdb1918\") " Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.276944 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host" (OuterVolumeSpecName: "host") pod "b8b6ff93-1581-48eb-b74d-f7c97cdb1918" (UID: "b8b6ff93-1581-48eb-b74d-f7c97cdb1918"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.278181 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-host\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.285810 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn" (OuterVolumeSpecName: "kube-api-access-j95rn") pod "b8b6ff93-1581-48eb-b74d-f7c97cdb1918" (UID: "b8b6ff93-1581-48eb-b74d-f7c97cdb1918"). InnerVolumeSpecName "kube-api-access-j95rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:36 crc kubenswrapper[4867]: I0214 05:52:36.380805 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j95rn\" (UniqueName: \"kubernetes.io/projected/b8b6ff93-1581-48eb-b74d-f7c97cdb1918-kube-api-access-j95rn\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.012480 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" path="/var/lib/kubelet/pods/b8b6ff93-1581-48eb-b74d-f7c97cdb1918/volumes" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.015883 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-tz25z" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.015891 4867 scope.go:117] "RemoveContainer" containerID="fd380d7db84361518f8a7673c0c88c1dc8ce8c1cbbe679b0aafd4c0d3248660f" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.018697 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"} Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.359083 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"] Feb 14 05:52:37 crc kubenswrapper[4867]: E0214 05:52:37.360076 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.360109 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.360451 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b6ff93-1581-48eb-b74d-f7c97cdb1918" containerName="container-00" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.361747 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.363845 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.505776 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.505932 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608397 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608610 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.608903 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"crc-debug-4hxqk\" (UID: 
\"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.631156 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"crc-debug-4hxqk\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: I0214 05:52:37.695953 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:37 crc kubenswrapper[4867]: W0214 05:52:37.723548 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ac22fd_fd3d_4423_885f_165f2cfb3e40.slice/crio-83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a WatchSource:0}: Error finding container 83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a: Status 404 returned error can't find the container with id 83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.033175 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerStarted","Data":"d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63"} Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.033218 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerStarted","Data":"83e31f3555d0823b5e024a7d8a5547c1b122056309f05c4836e880c92d5acc5a"} Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.038960 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" exitCode=0 Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.039047 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"} Feb 14 05:52:38 crc kubenswrapper[4867]: I0214 05:52:38.061994 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" podStartSLOduration=1.061971412 podStartE2EDuration="1.061971412s" podCreationTimestamp="2026-02-14 05:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 05:52:38.045782227 +0000 UTC m=+6190.126719541" watchObservedRunningTime="2026-02-14 05:52:38.061971412 +0000 UTC m=+6190.142908736" Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.050936 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerStarted","Data":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"} Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.053549 4867 generic.go:334] "Generic (PLEG): container finished" podID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" 
containerID="d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63" exitCode=0 Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.053582 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" event={"ID":"f6ac22fd-fd3d-4423-885f-165f2cfb3e40","Type":"ContainerDied","Data":"d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63"} Feb 14 05:52:39 crc kubenswrapper[4867]: I0214 05:52:39.087114 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4l6q7" podStartSLOduration=2.448085352 podStartE2EDuration="5.087095792s" podCreationTimestamp="2026-02-14 05:52:34 +0000 UTC" firstStartedPulling="2026-02-14 05:52:36.004097651 +0000 UTC m=+6188.085034965" lastFinishedPulling="2026-02-14 05:52:38.643108101 +0000 UTC m=+6190.724045405" observedRunningTime="2026-02-14 05:52:39.074196963 +0000 UTC m=+6191.155134277" watchObservedRunningTime="2026-02-14 05:52:39.087095792 +0000 UTC m=+6191.168033106" Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.195942 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.271383 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"] Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.286031 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-4hxqk"] Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.368744 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") pod \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.368988 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") pod \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\" (UID: \"f6ac22fd-fd3d-4423-885f-165f2cfb3e40\") " Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.369337 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host" (OuterVolumeSpecName: "host") pod "f6ac22fd-fd3d-4423-885f-165f2cfb3e40" (UID: "f6ac22fd-fd3d-4423-885f-165f2cfb3e40"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.369866 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-host\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.375856 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8" (OuterVolumeSpecName: "kube-api-access-58cd8") pod "f6ac22fd-fd3d-4423-885f-165f2cfb3e40" (UID: "f6ac22fd-fd3d-4423-885f-165f2cfb3e40"). InnerVolumeSpecName "kube-api-access-58cd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:40 crc kubenswrapper[4867]: I0214 05:52:40.472320 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58cd8\" (UniqueName: \"kubernetes.io/projected/f6ac22fd-fd3d-4423-885f-165f2cfb3e40-kube-api-access-58cd8\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.025601 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" path="/var/lib/kubelet/pods/f6ac22fd-fd3d-4423-885f-165f2cfb3e40/volumes" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.078311 4867 scope.go:117] "RemoveContainer" containerID="d33a1717567f6a680e45e042d96f55e350bd59885fe58a65c1000f62e337ee63" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.078343 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-4hxqk" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.417396 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"] Feb 14 05:52:41 crc kubenswrapper[4867]: E0214 05:52:41.418181 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.418194 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.418493 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ac22fd-fd3d-4423-885f-165f2cfb3e40" containerName="container-00" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.419375 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.424685 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-rtzc7"/"default-dockercfg-kt9b9" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.596807 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.596986 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.699744 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.700167 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.700492 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.721180 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"crc-debug-ppckp\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: I0214 05:52:41.736294 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:41 crc kubenswrapper[4867]: W0214 05:52:41.785022 4867 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabe60a3f_52b5_45a9_8603_17020367713d.slice/crio-9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f WatchSource:0}: Error finding container 9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f: Status 404 returned error can't find the container with id 9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f Feb 14 05:52:42 crc kubenswrapper[4867]: I0214 05:52:42.092871 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" event={"ID":"abe60a3f-52b5-45a9-8603-17020367713d","Type":"ContainerStarted","Data":"9d90e3055d5802151f70dd3eda3bc8ae23806b0cd5b5f89e99084cb4aa991c4f"} Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.107600 4867 generic.go:334] "Generic (PLEG): container finished" podID="abe60a3f-52b5-45a9-8603-17020367713d" containerID="6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc" exitCode=0 Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.107831 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" event={"ID":"abe60a3f-52b5-45a9-8603-17020367713d","Type":"ContainerDied","Data":"6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc"} Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.163770 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"] Feb 14 05:52:43 crc kubenswrapper[4867]: I0214 05:52:43.174832 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/crc-debug-ppckp"] Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.278800 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361152 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") pod \"abe60a3f-52b5-45a9-8603-17020367713d\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361213 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") pod \"abe60a3f-52b5-45a9-8603-17020367713d\" (UID: \"abe60a3f-52b5-45a9-8603-17020367713d\") " Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.361297 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host" (OuterVolumeSpecName: "host") pod "abe60a3f-52b5-45a9-8603-17020367713d" (UID: "abe60a3f-52b5-45a9-8603-17020367713d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.362449 4867 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abe60a3f-52b5-45a9-8603-17020367713d-host\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.369663 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd" (OuterVolumeSpecName: "kube-api-access-p2dgd") pod "abe60a3f-52b5-45a9-8603-17020367713d" (UID: "abe60a3f-52b5-45a9-8603-17020367713d"). InnerVolumeSpecName "kube-api-access-p2dgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.464885 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dgd\" (UniqueName: \"kubernetes.io/projected/abe60a3f-52b5-45a9-8603-17020367713d-kube-api-access-p2dgd\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.755596 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.755869 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:44 crc kubenswrapper[4867]: I0214 05:52:44.809300 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.010079 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe60a3f-52b5-45a9-8603-17020367713d" path="/var/lib/kubelet/pods/abe60a3f-52b5-45a9-8603-17020367713d/volumes" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.128824 4867 scope.go:117] "RemoveContainer" containerID="6b5cffd7e072900308ed2bccccbaeab058d3de9d59f219fa2df9bcdc2a813ccc" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.128838 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/crc-debug-ppckp" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.194125 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:45 crc kubenswrapper[4867]: I0214 05:52:45.252201 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.157388 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4l6q7" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" containerID="cri-o://7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" gracePeriod=2 Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.727004 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842521 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") pod \"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842601 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") pod \"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.842678 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") pod \"dd017092-381d-4839-bd5f-b8177c576ab1\" (UID: \"dd017092-381d-4839-bd5f-b8177c576ab1\") " Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.844018 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities" (OuterVolumeSpecName: "utilities") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.852462 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr" (OuterVolumeSpecName: "kube-api-access-wtkqr") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "kube-api-access-wtkqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.871525 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd017092-381d-4839-bd5f-b8177c576ab1" (UID: "dd017092-381d-4839-bd5f-b8177c576ab1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946057 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtkqr\" (UniqueName: \"kubernetes.io/projected/dd017092-381d-4839-bd5f-b8177c576ab1-kube-api-access-wtkqr\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946098 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:47 crc kubenswrapper[4867]: I0214 05:52:47.946109 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd017092-381d-4839-bd5f-b8177c576ab1-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174370 4867 generic.go:334] "Generic (PLEG): container finished" podID="dd017092-381d-4839-bd5f-b8177c576ab1" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" exitCode=0 Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174433 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"} Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174470 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4l6q7" event={"ID":"dd017092-381d-4839-bd5f-b8177c576ab1","Type":"ContainerDied","Data":"6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b"} Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174494 4867 scope.go:117] "RemoveContainer" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.174710 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4l6q7" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.203552 4867 scope.go:117] "RemoveContainer" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.251400 4867 scope.go:117] "RemoveContainer" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.290476 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.306098 4867 scope.go:117] "RemoveContainer" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.316495 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": container with ID starting with 7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767 not found: ID does not exist" containerID="7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.316586 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767"} err="failed to get container status \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": rpc error: code = NotFound desc = could not find container \"7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767\": container with ID starting with 7f3fa20b1289d32c7b1976f73449e5acc7b19d562021b6942a508a80adbfa767 not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.316616 4867 scope.go:117] "RemoveContainer" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.317638 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": container with ID starting with 34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24 not found: ID does not exist" containerID="34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.317706 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24"} err="failed to get container status \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": rpc error: code = NotFound desc = could not find container \"34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24\": container with ID starting with 34866708ac4aa71a03cad90f0816023d1803dec24c4a0beaa1d97dab3b2fee24 not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.317727 4867 scope.go:117] "RemoveContainer" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.318137 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": container with ID starting with 
42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f not found: ID does not exist" containerID="42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.318224 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f"} err="failed to get container status \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": rpc error: code = NotFound desc = could not find container \"42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f\": container with ID starting with 42597f52fbf5596dc6acabedb00d0864ac7884ca48008427361f865ec674d43f not found: ID does not exist" Feb 14 05:52:48 crc kubenswrapper[4867]: I0214 05:52:48.324669 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4l6q7"] Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.353628 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice/crio-6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice\": RecentStats: unable to find data in memory cache]" Feb 14 05:52:48 crc kubenswrapper[4867]: E0214 05:52:48.353964 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice/crio-6e4058288a301527f9e48b146670a568dad7aaba4b896ef17241b2794faaef0b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd017092_381d_4839_bd5f_b8177c576ab1.slice\": RecentStats: unable to find data in memory cache]" Feb 14 05:52:49 crc kubenswrapper[4867]: I0214 05:52:49.011710 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" path="/var/lib/kubelet/pods/dd017092-381d-4839-bd5f-b8177c576ab1/volumes" Feb 14 05:53:01 crc kubenswrapper[4867]: I0214 05:53:01.251078 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:53:01 crc kubenswrapper[4867]: I0214 05:53:01.252031 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:53:10 crc kubenswrapper[4867]: I0214 05:53:10.912379 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-api/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.116960 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-listener/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.149261 4867 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-evaluator/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.226058 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_532a3c72-e995-4be9-a7db-f288b6c1a311/aodh-notifier/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.354876 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-584d8cfdf8-4lt8c_3375fa12-2e3a-431e-9341-72d5a213083e/barbican-api/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.357373 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-584d8cfdf8-4lt8c_3375fa12-2e3a-431e-9341-72d5a213083e/barbican-api-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.528612 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f6876db8-kxmgv_4a4a3883-6484-4af9-a7f0-8dd5ee4da247/barbican-keystone-listener/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.607736 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7f6876db8-kxmgv_4a4a3883-6484-4af9-a7f0-8dd5ee4da247/barbican-keystone-listener-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.700576 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6cb8d59db5-hc7rx_6517b483-cb9c-465e-a7f0-f697b6ba3189/barbican-worker/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.785181 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6cb8d59db5-hc7rx_6517b483-cb9c-465e-a7f0-f697b6ba3189/barbican-worker-log/0.log" Feb 14 05:53:11 crc kubenswrapper[4867]: I0214 05:53:11.884808 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-d4nh9_e3d43ea0-54e7-4fd1-892d-bbc3d01a5321/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.037695 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-central-agent/1.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.142918 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-central-agent/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.168796 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/ceilometer-notification-agent/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.251781 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/proxy-httpd/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.261343 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_27437fd9-2bc5-48ac-9e34-e733da15dd2b/sg-core/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.455224 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_195db0d6-0991-48b6-a7a1-ad5311555ede/cinder-api/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.482721 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_195db0d6-0991-48b6-a7a1-ad5311555ede/cinder-api-log/0.log" Feb 14 05:53:12 crc kubenswrapper[4867]: 
I0214 05:53:12.643735 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/cinder-scheduler/1.log"
Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.673569 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/cinder-scheduler/0.log"
Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.738287 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/probe/0.log"
Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.832643 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-fq9nf_a716bc3f-98b5-4c50-af5f-46de007bd255/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 05:53:12 crc kubenswrapper[4867]: I0214 05:53:12.961696 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-78rwr_e04d43db-dfbf-41c6-8b73-48ff87baa800/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.098690 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/init/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.288174 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/init/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.368259 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-tnn8p_2ff227b0-1fbd-4d96-9201-8ef0fb5a68a6/dnsmasq-dns/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.369218 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-jsmhs_879dee23-804e-4b8a-ac20-0546383202b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.624989 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5e42dca-0c7d-485a-95bc-b26db4e12369/glance-httpd/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.664310 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5e42dca-0c7d-485a-95bc-b26db4e12369/glance-log/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.874637 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b66304c6-61a4-4b8b-b77b-dd816c0a0890/glance-log/0.log"
Feb 14 05:53:13 crc kubenswrapper[4867]: I0214 05:53:13.895058 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b66304c6-61a4-4b8b-b77b-dd816c0a0890/glance-httpd/0.log"
Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.739869 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bvpr9_01cb12dd-9d34-4898-941a-05635d21630f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.755582 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7b479dbc77-k8ts7_fcce6a26-826f-4268-9007-2e3c4411450f/heat-engine/0.log"
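All of these "Finished parsing log file" entries follow the kubelet's fixed on-disk layout for pod logs, /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log, so this run is a collector (plausibly the must-gather pod seen earlier) sweeping every container log on the node; paths that appear twice were simply visited twice, and a "1.log" next to a "0.log" indicates a restarted container. Since neither namespaces nor pod names may contain "_", the directory name splits unambiguously. A self-contained sketch (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// parsePodLogPath decodes /var/log/pods/<ns>_<pod>_<uid>/<container>/<restart>.log.
func parsePodLogPath(p string) (ns, pod, uid, container, restart string, err error) {
	rel, err := filepath.Rel("/var/log/pods", p)
	if err != nil {
		return
	}
	parts := strings.Split(rel, string(filepath.Separator))
	if len(parts) != 3 {
		err = fmt.Errorf("unexpected layout: %s", p)
		return
	}
	// "_" is illegal in namespace and pod names, so this split is safe.
	meta := strings.Split(parts[0], "_")
	if len(meta) != 3 {
		err = fmt.Errorf("unexpected pod dir: %s", parts[0])
		return
	}
	ns, pod, uid = meta[0], meta[1], meta[2]
	container = parts[1]
	restart = strings.TrimSuffix(parts[2], ".log")
	return
}

func main() {
	fmt.Println(parsePodLogPath(
		"/var/log/pods/openstack_cinder-scheduler-0_38c903d9-50f6-418b-84d5-7ee82e9d1e2f/cinder-scheduler/1.log"))
}
```

Feb 14 05:53:14 crc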
kubenswrapper[4867]: I0214 05:53:14.921065 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-64c645895b-sclxg_7996e855-fbe0-4324-a337-8841df83e714/heat-api/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.961335 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-57b4cc7645-246cl_24d4f5bc-b41b-4f17-977e-d36995a99521/heat-cfnapi/0.log" Feb 14 05:53:14 crc kubenswrapper[4867]: I0214 05:53:14.983640 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-c22xw_0b6f69a7-8ea6-48ad-aa0c-bd11b1efef10/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.225447 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29517421-jh7t8_dabbee2b-0869-439e-8c9c-f417ab44f850/keystone-cron/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.511269 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_89e70483-d3e8-4758-bb61-ae6147dd4f39/kube-state-metrics/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.556075 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-4rs2p_8ec3156c-bcce-4dee-8ce5-7773409e880e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.803463 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7595b47f77-vtg9d_1ddcc862-a10c-487c-aaa4-0e93df9c0005/keystone-api/0.log" Feb 14 05:53:15 crc kubenswrapper[4867]: I0214 05:53:15.893643 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-jgnc5_6e133b22-e3ca-4be2-8e71-56b6ca79dab2/logging-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.065985 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_e9139dc7-b868-4f7c-9e7e-10e313ff1e10/mysqld-exporter/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.410840 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7886d5654f-wzr2s_d4a16bfe-366a-4143-932a-e0b51615c401/neutron-api/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.444729 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wq44m_d07bc498-5b6c-465a-bda2-df814e9c19c8/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:16 crc kubenswrapper[4867]: I0214 05:53:16.477080 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7886d5654f-wzr2s_d4a16bfe-366a-4143-932a-e0b51615c401/neutron-httpd/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.153368 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_fdfa169f-f57f-4d9c-bef3-529878be941b/nova-cell0-conductor-conductor/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.398137 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_464bbcc9-1810-40bc-8773-bfa3e615b67b/nova-api-log/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.414209 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_e367f188-2aa4-4374-a768-92b8e463e40d/nova-cell1-conductor-conductor/0.log" Feb 14 05:53:17 crc 
kubenswrapper[4867]: I0214 05:53:17.749091 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3e1bf5e4-7b04-4a47-aa41-e547815fc623/nova-cell1-novncproxy-novncproxy/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.771195 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_464bbcc9-1810-40bc-8773-bfa3e615b67b/nova-api-api/0.log" Feb 14 05:53:17 crc kubenswrapper[4867]: I0214 05:53:17.792960 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-s5lc4_8c3553e4-9d3b-4c1d-bbc3-35371d733c86/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.124537 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3748198f-49fe-4a76-bd81-4ad518a594e8/nova-metadata-log/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.596698 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/mysql-bootstrap/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.639967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7bb228b6-c3a9-46ac-8c21-a8786c6ac11b/nova-scheduler-scheduler/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.855891 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/mysql-bootstrap/0.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.943361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/galera/1.log" Feb 14 05:53:18 crc kubenswrapper[4867]: I0214 05:53:18.947361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_505de461-9e6f-4914-bf50-e2bf4149b566/galera/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.237878 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/mysql-bootstrap/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.464194 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/mysql-bootstrap/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.532127 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/galera/0.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.603246 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b27199a8-11ac-4e59-90b8-b42387dd6dd2/galera/1.log" Feb 14 05:53:19 crc kubenswrapper[4867]: I0214 05:53:19.908100 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_6fdee887-8ecb-4c1e-8a88-0284fc050f0e/openstackclient/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.184564 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7lpqj_16c28c0f-9310-4721-87cf-2d1bb88b5bba/ovn-controller/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.317479 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-4gz6p_43e8f5ec-ba3d-4962-97f1-2be3a087852e/openstack-network-exporter/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: 
I0214 05:53:20.529744 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server-init/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.559314 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3748198f-49fe-4a76-bd81-4ad518a594e8/nova-metadata-metadata/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.696147 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovs-vswitchd/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.756900 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server-init/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.760566 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-dznst_6f356df8-0955-46c4-9166-2c1eef982399/ovsdb-server/0.log" Feb 14 05:53:20 crc kubenswrapper[4867]: I0214 05:53:20.968125 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-vjz5q_c3ef84d6-150a-46b1-8e93-7e650c8be1ef/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.027922 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0552eb77-2bc5-49dd-911e-f08071a83da9/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.104436 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_0552eb77-2bc5-49dd-911e-f08071a83da9/ovn-northd/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.274966 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_353b0cad-bb6a-4a68-b787-64fb7b32ee27/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.296090 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_353b0cad-bb6a-4a68-b787-64fb7b32ee27/ovsdbserver-nb/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.513665 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9faf0052-6200-4ac5-9216-7a26a29f4508/openstack-network-exporter/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.549361 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9faf0052-6200-4ac5-9216-7a26a29f4508/ovsdbserver-sb/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.822014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8574cd8bdd-r5cv6_2ef45c32-32a1-4302-84e3-3ff7e864cb99/placement-api/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.871330 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8574cd8bdd-r5cv6_2ef45c32-32a1-4302-84e3-3ff7e864cb99/placement-log/0.log" Feb 14 05:53:21 crc kubenswrapper[4867]: I0214 05:53:21.899021 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/init-config-reloader/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.095789 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/init-config-reloader/0.log" Feb 14 05:53:22 
crc kubenswrapper[4867]: I0214 05:53:22.151717 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/thanos-sidecar/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.159446 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/prometheus/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.204079 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8c8003cd-8992-4714-96a2-2e649aead118/config-reloader/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.414315 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/setup-container/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.671473 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/setup-container/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.760243 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_0901cb1a-f3c5-4eff-843b-cdb5c5c7a78c/rabbitmq/0.log" Feb 14 05:53:22 crc kubenswrapper[4867]: I0214 05:53:22.797821 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.203431 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.212835 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7e279860-a36f-473d-a79a-a34e5820e5a6/rabbitmq/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.299654 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.591898 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/rabbitmq/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.627264 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_82f2a63e-b256-4ad7-96ee-1def8a174cfb/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.693154 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.852560 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/setup-container/0.log" Feb 14 05:53:23 crc kubenswrapper[4867]: I0214 05:53:23.923018 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_c8afa7ab-eaaa-4558-99d5-c655cf271f62/rabbitmq/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.010681 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-8zlml_4a0a98e3-261b-460d-92c2-4fce312f5171/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 
05:53:24.243911 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-drcl6_0c240366-e845-4987-943c-afc965ddc2f4/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.361948 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-hwqcf_51f6e45c-a545-4b49-b6f8-a3048619f24d/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.528007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-lsj48_764366f2-ea14-4cc9-a195-52ee347e666d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.668563 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5rl49_e72df4ca-d603-4f2e-9ff1-3ec392ef11b7/ssh-known-hosts-edpm-deployment/0.log" Feb 14 05:53:24 crc kubenswrapper[4867]: I0214 05:53:24.873881 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5559ff585f-sb7wb_76fdab94-9bfb-48b7-82f9-bdd6d2258cdb/proxy-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.052567 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dc8sm_92f44db3-78d7-4707-af34-daf9f3bbc0bf/swift-ring-rebalance/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.056804 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5559ff585f-sb7wb_76fdab94-9bfb-48b7-82f9-bdd6d2258cdb/proxy-httpd/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.295342 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.327233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-reaper/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.335709 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-replicator/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.492066 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.514108 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/account-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.667014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-server/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.690235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-replicator/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.790897 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/container-updater/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.830456 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-auditor/0.log" Feb 14 05:53:25 crc kubenswrapper[4867]: I0214 05:53:25.950329 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-expirer/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.027013 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-replicator/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.064053 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-server/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.138374 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/object-updater/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.259121 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/rsync/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.327106 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_1d9f9909-1442-4d83-b2aa-0f58d4022338/swift-recon-cron/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.508214 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-x8zqq_b70721c5-f29f-4cc4-8ee7-88341a81765d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.738364 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-g8qps_43f6ac0f-9203-4827-bd57-acbae7793028/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:26 crc kubenswrapper[4867]: I0214 05:53:26.966566 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_be58ab35-1c46-426e-87a1-9010a643ead5/test-operator-logs-container/0.log" Feb 14 05:53:27 crc kubenswrapper[4867]: I0214 05:53:27.134663 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-sk5ns_6eaa68ce-0a13-47ec-b1d9-3a11bd50c4be/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.035443 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a161c594-8af3-458f-911a-bbf51e7bfcdd/tempest-tests-tempest-tests-runner/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.234351 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_f1d6dceb-5ee5-407d-ade4-be35d128d8dc/memcached/0.log" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.473945 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474437 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474453 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc 
kubenswrapper[4867]: E0214 05:53:28.474489 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-content" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474512 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-content" Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474531 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-utilities" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474537 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="extract-utilities" Feb 14 05:53:28 crc kubenswrapper[4867]: E0214 05:53:28.474571 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474577 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474799 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd017092-381d-4839-bd5f-b8177c576ab1" containerName="registry-server" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.474815 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe60a3f-52b5-45a9-8603-17020367713d" containerName="container-00" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.477965 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.492269 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.600302 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.600869 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.601130 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.703827 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" 
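For the replacement catalog pod, each volume goes through the reconciler pair visible here: VerifyControllerAttachedVolume confirms the volume is available to the node, then MountVolume.SetUp materializes it under /var/lib/kubelet/pods/<uid>/volumes/; for an emptyDir that amounts to little more than creating a directory, which is why the SetUp entries below succeed within milliseconds. A hedged sketch of the volume stanza that would yield these three names ("utilities" and "catalog-content" are authored in the catalog pod spec, while the kube-api-access-4lx6j projected service-account token is injected by admission rather than written by hand); this is an illustration, not the actual marketplace manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	catalogVolumes := []corev1.Volume{
		// Scratch space the extract-utilities init step unpacks tooling into.
		{Name: "utilities", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// Where extract-content places the catalog served by registry-server.
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		// "kube-api-access-4lx6j" (projected token) is added automatically.
	}
	fmt.Println(len(catalogVolumes), "volumes authored in the pod spec")
}
```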
Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.704009 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.704110 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.705529 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.705680 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.728060 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"redhat-operators-wsjxv\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:28 crc kubenswrapper[4867]: I0214 05:53:28.856220 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:29 crc kubenswrapper[4867]: I0214 05:53:29.461338 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:53:29 crc kubenswrapper[4867]: I0214 05:53:29.646789 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"bdbd7570f641df51015aac2cfdcc57ae989a722bc97af32d68059ca55601be89"} Feb 14 05:53:30 crc kubenswrapper[4867]: I0214 05:53:30.658325 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" exitCode=0 Feb 14 05:53:30 crc kubenswrapper[4867]: I0214 05:53:30.658844 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865"} Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.251271 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.251573 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:53:31 crc kubenswrapper[4867]: I0214 05:53:31.670407 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} Feb 14 05:53:38 crc kubenswrapper[4867]: I0214 05:53:38.752094 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" exitCode=0 Feb 14 05:53:38 crc kubenswrapper[4867]: I0214 05:53:38.752744 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} Feb 14 05:53:39 crc kubenswrapper[4867]: I0214 05:53:39.766999 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerStarted","Data":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} Feb 14 05:53:39 crc kubenswrapper[4867]: I0214 05:53:39.787453 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wsjxv" podStartSLOduration=3.305384714 podStartE2EDuration="11.787439002s" podCreationTimestamp="2026-02-14 05:53:28 +0000 UTC" firstStartedPulling="2026-02-14 05:53:30.660693637 +0000 UTC m=+6242.741630951" lastFinishedPulling="2026-02-14 05:53:39.142747925 +0000 UTC 
m=+6251.223685239" observedRunningTime="2026-02-14 05:53:39.785575643 +0000 UTC m=+6251.866512957" watchObservedRunningTime="2026-02-14 05:53:39.787439002 +0000 UTC m=+6251.868376316" Feb 14 05:53:48 crc kubenswrapper[4867]: I0214 05:53:48.856691 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:48 crc kubenswrapper[4867]: I0214 05:53:48.857380 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:53:49 crc kubenswrapper[4867]: I0214 05:53:49.914465 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:53:49 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:53:49 crc kubenswrapper[4867]: > Feb 14 05:53:58 crc kubenswrapper[4867]: I0214 05:53:58.745335 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:58 crc kubenswrapper[4867]: I0214 05:53:58.972235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.004819 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.008867 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.143496 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/util/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.198995 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/pull/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.248420 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_8c4df5843827cca9a4ba10f11751e86eb8b77e6cae3749237366ad3dfec8wq7_fc7263e4-82c8-4dd1-a5ad-2dc241d7f4cb/extract/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.762187 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ndb8l_652d3b74-0634-4f8f-b5ef-3adfc53920eb/manager/0.log" Feb 14 05:53:59 crc kubenswrapper[4867]: I0214 05:53:59.925435 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:53:59 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:53:59 crc kubenswrapper[4867]: > Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.204466 4867 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-tpfxn_1f889f7b-8ae5-43e3-ab54-d3bf06c010df/manager/0.log" Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.425572 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-jxpv2_185d4fd5-608b-48d8-8731-27e7a05adfe2/manager/0.log" Feb 14 05:54:00 crc kubenswrapper[4867]: I0214 05:54:00.762931 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-bgznq_4b75df5b-04e5-445f-8d2d-57c6cbe5971c/manager/0.log" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.250657 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.250976 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.251021 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.251984 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.252042 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e" gracePeriod=600 Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.558286 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-6nhjp_94ff35ef-77e1-4085-ad2f-837ebc666b2a/manager/1.log" Feb 14 05:54:01 crc kubenswrapper[4867]: E0214 05:54:01.626798 4867 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5992e46c_bce7_4b9f_82f2_c7ffb93286cd.slice/crio-969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e.scope\": RecentStats: unable to find data in memory cache]" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.810976 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-6nhjp_94ff35ef-77e1-4085-ad2f-837ebc666b2a/manager/0.log" Feb 14 05:54:01 crc kubenswrapper[4867]: I0214 05:54:01.993410 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-jqq2w_ebee5651-7233-4c18-bb97-a4dc91eabef4/manager/0.log" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034162 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e" exitCode=0 Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034203 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e"} Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034255 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"} Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.034276 4867 scope.go:117] "RemoveContainer" containerID="57022a394f9e48e84c2c7ab708dd1c775f970a72e65d0163882f6edf72cdab37" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.483554 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-x7qx5_dc65ca0c-1d72-468f-b600-dfb8332bf4bd/manager/0.log" Feb 14 05:54:02 crc kubenswrapper[4867]: I0214 05:54:02.792785 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-8dzwp_6b5078d9-f30f-40a8-b5b5-8eb11271ec10/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.126265 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-chbgl_3025ff58-4a91-43f5-8f15-94cadd0cef8b/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.479893 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-wwm9m_7bb6de63-3c92-43de-a01b-b34df765aeba/manager/0.log" Feb 14 05:54:03 crc kubenswrapper[4867]: I0214 05:54:03.553219 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-2xwdd_38a9cdf3-42e2-4279-8092-af7e8c82bc51/manager/0.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.165712 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-tf6rg_74a43e5b-11c4-459d-bbc7-03aa03489f17/manager/0.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.423422 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t_634f9e2f-2100-49e3-a31f-a369cf8ff13f/manager/1.log" Feb 14 05:54:04 crc kubenswrapper[4867]: I0214 05:54:04.487019 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cs8b7t_634f9e2f-2100-49e3-a31f-a369cf8ff13f/manager/0.log" Feb 14 05:54:05 crc kubenswrapper[4867]: I0214 05:54:05.126892 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-6b9546c8f4-49lm8_10461723-ecff-48fe-a034-9a07bf3bf8f7/operator/0.log" Feb 14 05:54:05 crc 
kubenswrapper[4867]: I0214 05:54:05.549401 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-29mb7_b4bb205c-0469-49a0-b783-9b51ae11ddfe/registry-server/1.log" Feb 14 05:54:05 crc kubenswrapper[4867]: I0214 05:54:05.778010 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7zkqz_64ff8480-2ca0-40d5-b5c9-448d0db3c575/manager/1.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.030649 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-29mb7_b4bb205c-0469-49a0-b783-9b51ae11ddfe/registry-server/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.394615 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-dszdp_ffb00aaf-6760-440e-827a-f795baf3693a/manager/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.754822 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-vwvtz_9ec66be5-3947-45d1-bf34-c7639e8d4c8a/manager/0.log" Feb 14 05:54:06 crc kubenswrapper[4867]: I0214 05:54:06.990498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-87pdl_c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d/operator/1.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.072831 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-87pdl_c38fa6a1-63b1-44a2-82b8-d6fd3d8a1f8d/operator/0.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.419403 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-snrw6_bc4bb4fd-bcc8-438b-af84-a2db3d3e346a/manager/0.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.895067 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-t7hwz_67e3f2b9-2dbf-4c35-b1cd-02be51f58e38/manager/0.log" Feb 14 05:54:07 crc kubenswrapper[4867]: I0214 05:54:07.904393 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7zkqz_64ff8480-2ca0-40d5-b5c9-448d0db3c575/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.192387 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-6d9jj_82e5dbee-ab1e-498c-9460-be75226afa18/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.304607 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-75585db5cc-kzk25_c83fa345-043f-453c-b797-a00db3111d44/manager/0.log" Feb 14 05:54:08 crc kubenswrapper[4867]: I0214 05:54:08.360034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-55dcdcc8d-49t56_d72a97fb-2a6a-4af1-8f0c-de88ab679119/manager/0.log" Feb 14 05:54:09 crc kubenswrapper[4867]: I0214 05:54:09.921084 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:54:09 crc kubenswrapper[4867]: timeout: failed 
to connect service ":50051" within 1s Feb 14 05:54:09 crc kubenswrapper[4867]: > Feb 14 05:54:14 crc kubenswrapper[4867]: I0214 05:54:14.460994 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-pxm8d_66c8a0dd-f076-4994-bd42-39c80de83233/manager/0.log" Feb 14 05:54:19 crc kubenswrapper[4867]: I0214 05:54:19.915290 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" probeResult="failure" output=< Feb 14 05:54:19 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 05:54:19 crc kubenswrapper[4867]: > Feb 14 05:54:28 crc kubenswrapper[4867]: I0214 05:54:28.909184 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:28 crc kubenswrapper[4867]: I0214 05:54:28.965438 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:29 crc kubenswrapper[4867]: I0214 05:54:29.694172 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.389009 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wsjxv" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" containerID="cri-o://7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" gracePeriod=2 Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.540312 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f47sx_89db71f1-1a8b-4c57-9a3d-eb725060aee9/control-plane-machine-set-operator/0.log" Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.824633 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-699tj_8437deca-adf5-4648-9abe-2c1c6376d07b/machine-api-operator/0.log" Feb 14 05:54:30 crc kubenswrapper[4867]: I0214 05:54:30.860451 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-699tj_8437deca-adf5-4648-9abe-2c1c6376d07b/kube-rbac-proxy/0.log" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.286143 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.314875 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315069 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315330 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") pod \"d55eb762-847d-4073-b20e-d1f306d0a424\" (UID: \"d55eb762-847d-4073-b20e-d1f306d0a424\") " Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315615 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities" (OuterVolumeSpecName: "utilities") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.315983 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.325878 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j" (OuterVolumeSpecName: "kube-api-access-4lx6j") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). InnerVolumeSpecName "kube-api-access-4lx6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403445 4867 generic.go:334] "Generic (PLEG): container finished" podID="d55eb762-847d-4073-b20e-d1f306d0a424" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" exitCode=0 Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403550 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wsjxv" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403597 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403656 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wsjxv" event={"ID":"d55eb762-847d-4073-b20e-d1f306d0a424","Type":"ContainerDied","Data":"bdbd7570f641df51015aac2cfdcc57ae989a722bc97af32d68059ca55601be89"} Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.403677 4867 scope.go:117] "RemoveContainer" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.416998 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lx6j\" (UniqueName: \"kubernetes.io/projected/d55eb762-847d-4073-b20e-d1f306d0a424-kube-api-access-4lx6j\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.430909 4867 scope.go:117] "RemoveContainer" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.452829 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d55eb762-847d-4073-b20e-d1f306d0a424" (UID: "d55eb762-847d-4073-b20e-d1f306d0a424"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.468092 4867 scope.go:117] "RemoveContainer" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.524981 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d55eb762-847d-4073-b20e-d1f306d0a424-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547133 4867 scope.go:117] "RemoveContainer" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.547709 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": container with ID starting with 7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2 not found: ID does not exist" containerID="7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547744 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2"} err="failed to get container status \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": rpc error: code = NotFound desc = could not find container \"7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2\": container with ID starting with 7217e257de5b0d565a1fcef5f665ca331c051276a1f2729401d6ffeea61a13c2 not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.547765 4867 
scope.go:117] "RemoveContainer" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.548034 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": container with ID starting with 7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f not found: ID does not exist" containerID="7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548072 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f"} err="failed to get container status \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": rpc error: code = NotFound desc = could not find container \"7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f\": container with ID starting with 7f9aa6a0d7a01fb7e025b11fbd0a7eb4577303eda11bc1442268815c89953f3f not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548091 4867 scope.go:117] "RemoveContainer" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: E0214 05:54:31.548379 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": container with ID starting with 30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865 not found: ID does not exist" containerID="30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.548452 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865"} err="failed to get container status \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": rpc error: code = NotFound desc = could not find container \"30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865\": container with ID starting with 30d1013f7099577360605cbfb6563ff4f5ab0068b09bcb682df52799f6f02865 not found: ID does not exist" Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.750837 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:31 crc kubenswrapper[4867]: I0214 05:54:31.765675 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wsjxv"] Feb 14 05:54:33 crc kubenswrapper[4867]: I0214 05:54:33.010125 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" path="/var/lib/kubelet/pods/d55eb762-847d-4073-b20e-d1f306d0a424/volumes" Feb 14 05:54:43 crc kubenswrapper[4867]: I0214 05:54:43.814952 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-gslqt_1f305679-0f4d-440e-a053-7b3627eaae9c/cert-manager-controller/0.log" Feb 14 05:54:44 crc kubenswrapper[4867]: I0214 05:54:44.023678 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-s4258_2224c85e-13be-400d-abf8-6b412d8c55ee/cert-manager-cainjector/0.log" Feb 14 05:54:44 crc kubenswrapper[4867]: I0214 05:54:44.062904 4867 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-xlg4t_34f53dfe-4707-4a5c-8745-c4ed944c6a6a/cert-manager-webhook/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.593057 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-xwq77_bd1547ee-0518-45af-bb63-9001da6fa7de/nmstate-console-plugin/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.796861 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-k6p82_ee9c78b0-77e6-47b0-8e8b-763d69cbd9aa/nmstate-handler/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.859325 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-57gj6_c9fcfe59-df8c-4433-a47f-8b07f90d98bc/kube-rbac-proxy/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.931348 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-57gj6_c9fcfe59-df8c-4433-a47f-8b07f90d98bc/nmstate-metrics/0.log" Feb 14 05:54:58 crc kubenswrapper[4867]: I0214 05:54:58.992897 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-tjfgz_914b3f92-c030-4d1e-8454-96a7220f851e/nmstate-operator/0.log" Feb 14 05:54:59 crc kubenswrapper[4867]: I0214 05:54:59.195624 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-khbvf_fdb6e297-9da3-41ff-a6f3-de81833178c8/nmstate-webhook/0.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.263042 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/1.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.308092 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/kube-rbac-proxy/0.log" Feb 14 05:55:14 crc kubenswrapper[4867]: I0214 05:55:14.445539 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.244819 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_5ecc414b-6bac-4b24-99c5-e2d1fb67f314/prometheus-operator-admission-webhook/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.247432 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06/prometheus-operator-admission-webhook/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.269925 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwlcr_987816d4-f9a4-47da-983c-317f9a3f4d86/prometheus-operator/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.771383 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kv4j7_94f47db9-4437-4b3e-aee5-f6f65e715e62/operator/0.log" Feb 14 05:55:30 crc kubenswrapper[4867]: I0214 05:55:30.868445 4867 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-492b9_701367b7-aef6-43b5-a0f9-3a91206962de/observability-ui-dashboards/0.log" Feb 14 05:55:31 crc kubenswrapper[4867]: I0214 05:55:31.002775 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7qfh9_31f03187-50f6-4015-afdc-422455a63006/perses-operator/0.log" Feb 14 05:55:46 crc kubenswrapper[4867]: I0214 05:55:46.999157 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-pmdnk_89b20edb-1b24-48e1-accf-f0a2b65c8da1/cluster-logging-operator/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.241235 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-4tm7t_0b309a8c-060a-4e8b-9731-3c4c3aab56f7/collector/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.254676 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_6975f95f-884b-4952-8bf8-0d18537e3403/loki-compactor/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.472217 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-7zdqp_c9201352-8585-47d4-9c13-b9e21ac4cd9f/loki-distributor/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.506101 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-l82l4_0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5/gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.605947 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-l82l4_0c1f86e8-fb7b-40a7-9cc7-07bc9aa74ce5/opa/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.695406 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-md7ts_d28844dc-6974-446b-bd9a-b22586858387/opa/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.695772 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-767ffcbf75-md7ts_d28844dc-6974-446b-bd9a-b22586858387/gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.861899 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_3c3333e0-ec4e-41bf-8296-9469ad3ac9cd/loki-index-gateway/0.log" Feb 14 05:55:47 crc kubenswrapper[4867]: I0214 05:55:47.982878 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_775ca902-fd03-4191-9440-ea598768d4e6/loki-ingester/0.log" Feb 14 05:55:48 crc kubenswrapper[4867]: I0214 05:55:48.106182 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-5td7f_9c48c070-b4b3-48af-b40a-d82788f764d9/loki-querier/0.log" Feb 14 05:55:48 crc kubenswrapper[4867]: I0214 05:55:48.224027 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-cfcbp_837b4fe4-f827-4882-8af7-225b18bb3e22/loki-query-frontend/0.log" Feb 14 05:56:01 crc kubenswrapper[4867]: I0214 05:56:01.250975 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 14 05:56:01 crc kubenswrapper[4867]: I0214 05:56:01.251914 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:56:04 crc kubenswrapper[4867]: I0214 05:56:04.775094 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-zhmxc_516cf204-1263-431e-a450-039739b0d925/kube-rbac-proxy/0.log" Feb 14 05:56:04 crc kubenswrapper[4867]: I0214 05:56:04.779545 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-zhmxc_516cf204-1263-431e-a450-039739b0d925/controller/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.021658 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.534412 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.549967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.571447 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.582367 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.831828 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.841591 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.859261 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:05 crc kubenswrapper[4867]: I0214 05:56:05.873068 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.064383 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-reloader/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.081317 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.140135 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/cp-frr-files/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.160227 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/controller/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.402230 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/kube-rbac-proxy/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.472802 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr-metrics/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.629880 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr/1.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.710368 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/kube-rbac-proxy-frr/0.log" Feb 14 05:56:06 crc kubenswrapper[4867]: I0214 05:56:06.735934 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/reloader/0.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.003131 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-9gqfb_85e0628d-4132-4c09-9da0-35db43024c9c/frr-k8s-webhook-server/0.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.111328 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-9gqfb_85e0628d-4132-4c09-9da0-35db43024c9c/frr-k8s-webhook-server/1.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.410869 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67594686f4-52kwb_e1d5f0bd-4e8c-45c7-9d4e-c530689948ad/manager/1.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.529034 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-67594686f4-52kwb_e1d5f0bd-4e8c-45c7-9d4e-c530689948ad/manager/0.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.732334 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bfb45cb-mpxbn_d5e9c930-96ca-4a35-af4f-b8ae033469a5/webhook-server/1.log" Feb 14 05:56:07 crc kubenswrapper[4867]: I0214 05:56:07.786767 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f9bfb45cb-mpxbn_d5e9c930-96ca-4a35-af4f-b8ae033469a5/webhook-server/0.log" Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.005678 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/kube-rbac-proxy/0.log" Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.302391 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-nzdwg_cfde5532-97c7-47b8-8b63-0159fc9e82b9/frr/0.log" Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.419799 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/speaker/1.log" Feb 14 05:56:08 crc kubenswrapper[4867]: I0214 05:56:08.780924 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4hvw7_6e0a7a97-9ea6-4dcf-85a4-995d891fa5f8/speaker/0.log" Feb 14 05:56:21 crc kubenswrapper[4867]: I0214 
05:56:21.902123 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.100469 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.108913 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.160421 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.346991 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.370982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.385460 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19fjnlv_936b69da-ce28-43de-8fcf-82e83936de1b/extract/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.536093 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.743980 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.770737 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.792582 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.965986 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/pull/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.980086 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/util/0.log" Feb 14 05:56:22 crc kubenswrapper[4867]: I0214 05:56:22.980330 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0859vdc_2d5a082b-f5f1-4a9d-be2a-31df6953a4a4/extract/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.195604 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.328745 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.329803 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.378077 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.556255 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/extract/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.577122 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/pull/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.608876 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213vbtkn_cc14a3a2-05fa-4675-bace-02675c564e5f/util/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.768798 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.935737 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.983854 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log" Feb 14 05:56:23 crc kubenswrapper[4867]: I0214 05:56:23.995119 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.132014 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-utilities/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.135111 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/extract-content/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.481932 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.547948 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/registry-server/1.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.742697 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.771434 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.798316 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.929276 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mrccv_e0fe6db4-add0-4993-a40c-c5b6725565fa/registry-server/0.log" Feb 14 05:56:24 crc kubenswrapper[4867]: I0214 05:56:24.966007 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-utilities/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.025830 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/extract-content/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.291096 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.511131 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.548076 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.563373 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.856982 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/pull/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.890885 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/util/0.log" Feb 14 05:56:25 crc kubenswrapper[4867]: I0214 05:56:25.921128 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989kxs9j_af62ec3e-1c1b-400e-bdb9-ba34fc8ef5fe/extract/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.052228 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-w69fq_be125812-eeef-4043-bef9-fea01037dddb/registry-server/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.147224 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.300848 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.322290 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.334852 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.509955 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/util/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.510498 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/extract/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.528087 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecadnhlb_10159ab6-8862-4a8a-afd2-3fb5920f2cae/pull/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.557728 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-p82xp_33b576d8-f768-4fd2-895d-7d4ababe8714/marketplace-operator/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.703595 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.921106 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.927170 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log" Feb 14 05:56:26 crc kubenswrapper[4867]: I0214 05:56:26.935832 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.126527 4867 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-content/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.130172 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/extract-utilities/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.204087 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.362233 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gbz8c_c8fe62eb-932d-4b17-8ffa-6c90780bdd74/registry-server/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.388079 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.390711 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.400206 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.562477 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-content/0.log" Feb 14 05:56:27 crc kubenswrapper[4867]: I0214 05:56:27.567534 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/extract-utilities/0.log" Feb 14 05:56:28 crc kubenswrapper[4867]: I0214 05:56:28.599618 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bvb8v_140d0152-99c5-425c-b956-595dea337206/registry-server/0.log" Feb 14 05:56:31 crc kubenswrapper[4867]: I0214 05:56:31.250858 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:56:31 crc kubenswrapper[4867]: I0214 05:56:31.251416 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.485673 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-fmcqr_5ecc414b-6bac-4b24-99c5-e2d1fb67f314/prometheus-operator-admission-webhook/0.log" Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.486967 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-vwlcr_987816d4-f9a4-47da-983c-317f9a3f4d86/prometheus-operator/0.log" Feb 14 05:56:40 crc kubenswrapper[4867]: 
I0214 05:56:40.517060 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-56b9d9b8d-rk4gj_8c7f9ea9-2c5c-4e9c-97b2-02dd8a216d06/prometheus-operator-admission-webhook/0.log" Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.615690 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-kv4j7_94f47db9-4437-4b3e-aee5-f6f65e715e62/operator/0.log" Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.675834 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-7qfh9_31f03187-50f6-4015-afdc-422455a63006/perses-operator/0.log" Feb 14 05:56:40 crc kubenswrapper[4867]: I0214 05:56:40.701350 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-492b9_701367b7-aef6-43b5-a0f9-3a91206962de/observability-ui-dashboards/0.log" Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.793380 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/kube-rbac-proxy/0.log" Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.824142 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/0.log" Feb 14 05:56:54 crc kubenswrapper[4867]: I0214 05:56:54.859245 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5479889c99-ltnxf_4a918644-d451-4f71-8a69-627b0de1ebb7/manager/1.log" Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251017 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251584 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.251630 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.252570 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"} pod="openshift-machine-config-operator/machine-config-daemon-4s95t" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 14 05:57:01 crc kubenswrapper[4867]: I0214 05:57:01.252622 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" containerID="cri-o://85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" gracePeriod=600 Feb 14 
05:57:01 crc kubenswrapper[4867]: E0214 05:57:01.379386 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300370 4867 generic.go:334] "Generic (PLEG): container finished" podID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" exitCode=0 Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300461 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerDied","Data":"85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154"} Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.300848 4867 scope.go:117] "RemoveContainer" containerID="969e0cb4cefe8b8e5046ee62cca830ff3afc22fe72785a6b708c487b9ff93b5e" Feb 14 05:57:02 crc kubenswrapper[4867]: I0214 05:57:02.301782 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:57:02 crc kubenswrapper[4867]: E0214 05:57:02.302225 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:57:16 crc kubenswrapper[4867]: I0214 05:57:16.997994 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:57:17 crc kubenswrapper[4867]: E0214 05:57:17.008044 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:57:29 crc kubenswrapper[4867]: I0214 05:57:29.997784 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:57:30 crc kubenswrapper[4867]: E0214 05:57:29.999100 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:57:42 crc kubenswrapper[4867]: I0214 05:57:42.998686 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:57:43 crc kubenswrapper[4867]: E0214 05:57:42.999825 4867 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:57:57 crc kubenswrapper[4867]: I0214 05:57:56.998282 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:57:57 crc kubenswrapper[4867]: E0214 05:57:56.999200 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:58:11 crc kubenswrapper[4867]: I0214 05:58:11.997735 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:58:11 crc kubenswrapper[4867]: E0214 05:58:11.999595 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:58:22 crc kubenswrapper[4867]: I0214 05:58:22.997758 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:58:22 crc kubenswrapper[4867]: E0214 05:58:22.998905 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:58:37 crc kubenswrapper[4867]: I0214 05:58:37.997849 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:58:37 crc kubenswrapper[4867]: E0214 05:58:37.998725 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:58:51 crc kubenswrapper[4867]: I0214 05:58:51.996919 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:58:51 crc kubenswrapper[4867]: E0214 05:58:51.997721 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:02 crc kubenswrapper[4867]: I0214 05:59:02.997396 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:02 crc kubenswrapper[4867]: E0214 05:59:02.999520 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.919723 4867 generic.go:334] "Generic (PLEG): container finished" podID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c" exitCode=0 Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.919885 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rtzc7/must-gather-wmzns" event={"ID":"89d6412f-a37d-4f30-8c3a-9514185847fc","Type":"ContainerDied","Data":"177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c"} Feb 14 05:59:04 crc kubenswrapper[4867]: I0214 05:59:04.920991 4867 scope.go:117] "RemoveContainer" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c" Feb 14 05:59:05 crc kubenswrapper[4867]: I0214 05:59:05.046933 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/gather/0.log" Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.030322 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"] Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.031538 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rtzc7/must-gather-wmzns" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" containerID="cri-o://8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76" gracePeriod=2 Feb 14 05:59:13 crc kubenswrapper[4867]: I0214 05:59:13.051153 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rtzc7/must-gather-wmzns"] Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.068935 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/copy/0.log" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.070021 4867 generic.go:334] "Generic (PLEG): container finished" podID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerID="8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76" exitCode=143 Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.070112 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6a5a00012c52a2aac1e8dffdc748b022caf87a8674b148896c8bda016c8acb" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.079547 4867 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rtzc7_must-gather-wmzns_89d6412f-a37d-4f30-8c3a-9514185847fc/copy/0.log" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.079902 
4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.189124 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") pod \"89d6412f-a37d-4f30-8c3a-9514185847fc\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.189813 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") pod \"89d6412f-a37d-4f30-8c3a-9514185847fc\" (UID: \"89d6412f-a37d-4f30-8c3a-9514185847fc\") " Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.234709 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv" (OuterVolumeSpecName: "kube-api-access-slmvv") pod "89d6412f-a37d-4f30-8c3a-9514185847fc" (UID: "89d6412f-a37d-4f30-8c3a-9514185847fc"). InnerVolumeSpecName "kube-api-access-slmvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.294224 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slmvv\" (UniqueName: \"kubernetes.io/projected/89d6412f-a37d-4f30-8c3a-9514185847fc-kube-api-access-slmvv\") on node \"crc\" DevicePath \"\"" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.532356 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "89d6412f-a37d-4f30-8c3a-9514185847fc" (UID: "89d6412f-a37d-4f30-8c3a-9514185847fc"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 05:59:14 crc kubenswrapper[4867]: I0214 05:59:14.602569 4867 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/89d6412f-a37d-4f30-8c3a-9514185847fc-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.011786 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" path="/var/lib/kubelet/pods/89d6412f-a37d-4f30-8c3a-9514185847fc/volumes" Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.080556 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rtzc7/must-gather-wmzns" Feb 14 05:59:15 crc kubenswrapper[4867]: I0214 05:59:15.998226 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:15 crc kubenswrapper[4867]: E0214 05:59:15.998760 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:19 crc kubenswrapper[4867]: I0214 05:59:19.708245 4867 scope.go:117] "RemoveContainer" containerID="8bda962d52e435b73ab83aa35089685e683712a0b3acfa743e4df637f1d29a76" Feb 14 05:59:19 crc kubenswrapper[4867]: I0214 05:59:19.761687 4867 scope.go:117] "RemoveContainer" containerID="177c95f4e7826d6d799901d70a180712f443165780432f255fcb63f96509fb1c" Feb 14 05:59:29 crc kubenswrapper[4867]: I0214 05:59:29.011929 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:29 crc kubenswrapper[4867]: E0214 05:59:29.037255 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:41 crc kubenswrapper[4867]: I0214 05:59:41.997457 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:41 crc kubenswrapper[4867]: E0214 05:59:41.998458 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 05:59:53 crc kubenswrapper[4867]: I0214 05:59:53.998189 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 05:59:54 crc kubenswrapper[4867]: E0214 05:59:54.000467 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.275729 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278766 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 
06:00:00.278811 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278866 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278876 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278918 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278928 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.278950 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-content" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.278959 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-content" Feb 14 06:00:00 crc kubenswrapper[4867]: E0214 06:00:00.279003 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-utilities" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279014 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="extract-utilities" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279331 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="gather" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279366 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d6412f-a37d-4f30-8c3a-9514185847fc" containerName="copy" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.279386 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="d55eb762-847d-4073-b20e-d1f306d0a424" containerName="registry-server" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.280534 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.302241 4867 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.308404 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.309843 4867 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.384529 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.385321 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.385439 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487435 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487620 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.487693 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.490268 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod 
\"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.513453 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.519264 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"collect-profiles-29517480-pr6pg\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:00 crc kubenswrapper[4867]: I0214 06:00:00.619296 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:01 crc kubenswrapper[4867]: I0214 06:00:01.900145 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg"] Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.767099 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerStarted","Data":"eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d"} Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.767537 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerStarted","Data":"5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561"} Feb 14 06:00:02 crc kubenswrapper[4867]: I0214 06:00:02.789435 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" podStartSLOduration=2.789411786 podStartE2EDuration="2.789411786s" podCreationTimestamp="2026-02-14 06:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 06:00:02.784117577 +0000 UTC m=+6634.865054901" watchObservedRunningTime="2026-02-14 06:00:02.789411786 +0000 UTC m=+6634.870349100" Feb 14 06:00:04 crc kubenswrapper[4867]: I0214 06:00:04.787487 4867 generic.go:334] "Generic (PLEG): container finished" podID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerID="eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d" exitCode=0 Feb 14 06:00:04 crc kubenswrapper[4867]: I0214 06:00:04.787555 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerDied","Data":"eb918c10b0fa3eab2238e9edf84e1078bf8602876b2df27c8616b050448c6f7d"} Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.218052 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.364982 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.365442 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.365744 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") pod \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\" (UID: \"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e\") " Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.366652 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume" (OuterVolumeSpecName: "config-volume") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.367011 4867 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.372414 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh" (OuterVolumeSpecName: "kube-api-access-7ggmh") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "kube-api-access-7ggmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.373325 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" (UID: "83b3f1b1-9207-4686-88ed-dd7ec0a3d00e"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.469721 4867 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.469974 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ggmh\" (UniqueName: \"kubernetes.io/projected/83b3f1b1-9207-4686-88ed-dd7ec0a3d00e-kube-api-access-7ggmh\") on node \"crc\" DevicePath \"\"" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810616 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" event={"ID":"83b3f1b1-9207-4686-88ed-dd7ec0a3d00e","Type":"ContainerDied","Data":"5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561"} Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810950 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5937cf6dba17116f3d3fb07faac4590c6dcaa85002d2afda6f134157b30dd561" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.810800 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29517480-pr6pg" Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.885236 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.896590 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29517435-sp924"] Feb 14 06:00:06 crc kubenswrapper[4867]: I0214 06:00:06.998253 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:06 crc kubenswrapper[4867]: E0214 06:00:06.998561 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:07 crc kubenswrapper[4867]: I0214 06:00:07.010919 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d32d646-2d3a-40db-acb7-a2c9e410c655" path="/var/lib/kubelet/pods/4d32d646-2d3a-40db-acb7-a2c9e410c655/volumes" Feb 14 06:00:19 crc kubenswrapper[4867]: I0214 06:00:19.893639 4867 scope.go:117] "RemoveContainer" containerID="57685fa039b788fdc3d04fb1da2849cb66a1a8363710569f8bd5ff77b56239d6" Feb 14 06:00:20 crc kubenswrapper[4867]: I0214 06:00:20.997908 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:20 crc kubenswrapper[4867]: E0214 06:00:20.998419 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:34 
crc kubenswrapper[4867]: I0214 06:00:34.998851 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:35 crc kubenswrapper[4867]: E0214 06:00:34.999918 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:44 crc kubenswrapper[4867]: I0214 06:00:44.762542 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:45 crc kubenswrapper[4867]: I0214 06:00:45.997401 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:00:45 crc kubenswrapper[4867]: E0214 06:00:45.998114 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:00:49 crc kubenswrapper[4867]: I0214 06:00:49.762071 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.764431 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.765183 4867 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.764714 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.769306 4867 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Feb 14 06:00:54 crc kubenswrapper[4867]: I0214 06:00:54.769545 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerName="ceilometer-central-agent" containerID="cri-o://b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84" gracePeriod=30 Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.198403 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"] 
Feb 14 06:01:00 crc kubenswrapper[4867]: E0214 06:01:00.200323 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.200370 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.200666 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b3f1b1-9207-4686-88ed-dd7ec0a3d00e" containerName="collect-profiles" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.201542 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.225971 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"] Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.321482 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.322834 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.323106 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.323825 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426107 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426199 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426224 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.426293 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.442604 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.443087 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.449567 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.454255 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"keystone-cron-29517481-xvtzl\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:00 crc kubenswrapper[4867]: I0214 06:01:00.554633 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.003656 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:01:01 crc kubenswrapper[4867]: E0214 06:01:01.004729 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.296342 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29517481-xvtzl"] Feb 14 06:01:01 crc kubenswrapper[4867]: I0214 06:01:01.462338 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerStarted","Data":"b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2"} Feb 14 06:01:14 crc kubenswrapper[4867]: I0214 06:01:14.997912 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:01:15 crc kubenswrapper[4867]: E0214 06:01:14.999411 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:01:19 crc kubenswrapper[4867]: I0214 06:01:19.750216 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerStarted","Data":"af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4"} Feb 14 06:01:19 crc kubenswrapper[4867]: I0214 06:01:19.777873 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29517481-xvtzl" podStartSLOduration=19.777854687 podStartE2EDuration="19.777854687s" podCreationTimestamp="2026-02-14 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-14 06:01:19.774690744 +0000 UTC m=+6711.855628058" watchObservedRunningTime="2026-02-14 06:01:19.777854687 +0000 UTC m=+6711.858792001" Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.496291 4867 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.781729 4867 generic.go:334] "Generic (PLEG): container finished" podID="27437fd9-2bc5-48ac-9e34-e733da15dd2b" containerID="b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84" exitCode=0 Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 06:01:21.781777 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerDied","Data":"b41170ee2bb16f2e334839addb6382f3dd37db9fe4c0c536cea87f10a0681b84"} Feb 14 06:01:21 crc kubenswrapper[4867]: I0214 
06:01:21.781817 4867 scope.go:117] "RemoveContainer" containerID="86c896e795193cbc041ce48aa8f5cfb49ed56bfd923d3ce2eec001f309e51bd7" Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.796819 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27437fd9-2bc5-48ac-9e34-e733da15dd2b","Type":"ContainerStarted","Data":"71714cb23ecd923ca245480a524041bb02e6c9e3073f3c792d1b4ec0a66caae9"} Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.799110 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerDied","Data":"af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4"} Feb 14 06:01:22 crc kubenswrapper[4867]: I0214 06:01:22.799003 4867 generic.go:334] "Generic (PLEG): container finished" podID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerID="af313107cf808073398af8332b5402e83f1649a10d5e262a4b1d2513f24ea6c4" exitCode=0 Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.254678 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.419807 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420239 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420376 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.420662 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") pod \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\" (UID: \"948cecc5-1590-4c1e-b8c5-75d4c05abc2e\") " Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.443248 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn" (OuterVolumeSpecName: "kube-api-access-s5lgn") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "kube-api-access-s5lgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.444456 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.479326 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.507352 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data" (OuterVolumeSpecName: "config-data") pod "948cecc5-1590-4c1e-b8c5-75d4c05abc2e" (UID: "948cecc5-1590-4c1e-b8c5-75d4c05abc2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523538 4867 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523586 4867 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523596 4867 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-config-data\") on node \"crc\" DevicePath \"\"" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.523606 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5lgn\" (UniqueName: \"kubernetes.io/projected/948cecc5-1590-4c1e-b8c5-75d4c05abc2e-kube-api-access-s5lgn\") on node \"crc\" DevicePath \"\"" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825648 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29517481-xvtzl" event={"ID":"948cecc5-1590-4c1e-b8c5-75d4c05abc2e","Type":"ContainerDied","Data":"b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2"} Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825694 4867 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f8cd758036799a7597373912ef0a8ff1feee20e55cfc07ae58fb236331fbf2" Feb 14 06:01:24 crc kubenswrapper[4867]: I0214 06:01:24.825890 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29517481-xvtzl" Feb 14 06:01:26 crc kubenswrapper[4867]: I0214 06:01:26.998673 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:01:27 crc kubenswrapper[4867]: E0214 06:01:26.999434 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:01:39 crc kubenswrapper[4867]: I0214 06:01:39.019208 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:01:39 crc kubenswrapper[4867]: E0214 06:01:39.020319 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:01:49 crc kubenswrapper[4867]: I0214 06:01:49.998462 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:01:50 crc kubenswrapper[4867]: E0214 06:01:49.999529 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:02:00 crc kubenswrapper[4867]: I0214 06:02:00.997433 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:02:01 crc kubenswrapper[4867]: E0214 06:02:00.998574 4867 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4s95t_openshift-machine-config-operator(5992e46c-bce7-4b9f-82f2-c7ffb93286cd)\"" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" Feb 14 06:02:14 crc kubenswrapper[4867]: I0214 06:02:14.998953 4867 scope.go:117] "RemoveContainer" containerID="85cc1629feee14dea1a79134dc431065e3e76ce7010ce3c502e802c3ae8c3154" Feb 14 06:02:15 crc kubenswrapper[4867]: I0214 06:02:15.494447 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" event={"ID":"5992e46c-bce7-4b9f-82f2-c7ffb93286cd","Type":"ContainerStarted","Data":"92bd21b391618693b38219f6b0a3cae0e5df83bf07f4ba2e4705f2380a1917b6"} Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.276465 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:02:52 crc kubenswrapper[4867]: E0214 06:02:52.278625 4867 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.278647 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.279208 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="948cecc5-1590-4c1e-b8c5-75d4c05abc2e" containerName="keystone-cron" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.281804 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.299497 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326571 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326684 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.326746 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429284 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429537 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.429653 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.430633 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod 
\"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.430701 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.458743 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"certified-operators-bp6ls\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:52 crc kubenswrapper[4867]: I0214 06:02:52.613395 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:02:53 crc kubenswrapper[4867]: I0214 06:02:53.207007 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.035985 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f" exitCode=0 Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.036032 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"} Feb 14 06:02:54 crc kubenswrapper[4867]: I0214 06:02:54.036345 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"a3992203babd7ad31e38237c380935b9570505dceb845b8dacf9d8bf92050df0"} Feb 14 06:02:55 crc kubenswrapper[4867]: I0214 06:02:55.048310 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"} Feb 14 06:02:57 crc kubenswrapper[4867]: I0214 06:02:57.082549 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d" exitCode=0 Feb 14 06:02:57 crc kubenswrapper[4867]: I0214 06:02:57.082632 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"} Feb 14 06:02:58 crc kubenswrapper[4867]: I0214 06:02:58.112642 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerStarted","Data":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"} Feb 14 06:02:58 crc kubenswrapper[4867]: I0214 06:02:58.141249 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-bp6ls" podStartSLOduration=2.719328822 podStartE2EDuration="6.141223682s" podCreationTimestamp="2026-02-14 06:02:52 +0000 UTC" firstStartedPulling="2026-02-14 06:02:54.038364753 +0000 UTC m=+6806.119302067" lastFinishedPulling="2026-02-14 06:02:57.460259613 +0000 UTC m=+6809.541196927" observedRunningTime="2026-02-14 06:02:58.133664274 +0000 UTC m=+6810.214601608" watchObservedRunningTime="2026-02-14 06:02:58.141223682 +0000 UTC m=+6810.222161016" Feb 14 06:03:02 crc kubenswrapper[4867]: I0214 06:03:02.614088 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:02 crc kubenswrapper[4867]: I0214 06:03:02.614916 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:03 crc kubenswrapper[4867]: I0214 06:03:03.687820 4867 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bp6ls" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" probeResult="failure" output=< Feb 14 06:03:03 crc kubenswrapper[4867]: timeout: failed to connect service ":50051" within 1s Feb 14 06:03:03 crc kubenswrapper[4867]: > Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.674968 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.752661 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:12 crc kubenswrapper[4867]: I0214 06:03:12.923414 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:03:14 crc kubenswrapper[4867]: I0214 06:03:14.328892 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bp6ls" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" containerID="cri-o://dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" gracePeriod=2 Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.309674 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343325 4867 generic.go:334] "Generic (PLEG): container finished" podID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" exitCode=0 Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343383 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"} Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343410 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bp6ls" event={"ID":"4b226f7b-fb10-4b1a-a225-587c9afaa99f","Type":"ContainerDied","Data":"a3992203babd7ad31e38237c380935b9570505dceb845b8dacf9d8bf92050df0"} Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343427 4867 scope.go:117] "RemoveContainer" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.343581 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bp6ls" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.389827 4867 scope.go:117] "RemoveContainer" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428265 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428364 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.428414 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") pod \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\" (UID: \"4b226f7b-fb10-4b1a-a225-587c9afaa99f\") " Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.429379 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities" (OuterVolumeSpecName: "utilities") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.429861 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.450766 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz" (OuterVolumeSpecName: "kube-api-access-d2dtz") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "kube-api-access-d2dtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.450846 4867 scope.go:117] "RemoveContainer" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.532322 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2dtz\" (UniqueName: \"kubernetes.io/projected/4b226f7b-fb10-4b1a-a225-587c9afaa99f-kube-api-access-d2dtz\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.588881 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b226f7b-fb10-4b1a-a225-587c9afaa99f" (UID: "4b226f7b-fb10-4b1a-a225-587c9afaa99f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.609859 4867 scope.go:117] "RemoveContainer" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.615801 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": container with ID starting with dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8 not found: ID does not exist" containerID="dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.615855 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8"} err="failed to get container status \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": rpc error: code = NotFound desc = could not find container \"dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8\": container with ID starting with dc1de924f0bff90f09c48bd209c7d5185435f5f557aa54e03dbd435d4b987ff8 not found: ID does not exist" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.615892 4867 scope.go:117] "RemoveContainer" containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d" Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.616397 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": container with ID starting with f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d not found: ID does not exist" 
containerID="f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.616440 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d"} err="failed to get container status \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": rpc error: code = NotFound desc = could not find container \"f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d\": container with ID starting with f5501610fe8e0922fb72b91ffd3a5ca2b8292983fe1aefe1e493daa42e69cb0d not found: ID does not exist" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.616466 4867 scope.go:117] "RemoveContainer" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f" Feb 14 06:03:15 crc kubenswrapper[4867]: E0214 06:03:15.617986 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": container with ID starting with de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f not found: ID does not exist" containerID="de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.618029 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f"} err="failed to get container status \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": rpc error: code = NotFound desc = could not find container \"de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f\": container with ID starting with de503032f08be5fc3bffc0d2d0f625246c2e6d8b851ecedc23302420fd9d068f not found: ID does not exist" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.634995 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b226f7b-fb10-4b1a-a225-587c9afaa99f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.681452 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:03:15 crc kubenswrapper[4867]: I0214 06:03:15.696249 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bp6ls"] Feb 14 06:03:17 crc kubenswrapper[4867]: I0214 06:03:17.029843 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" path="/var/lib/kubelet/pods/4b226f7b-fb10-4b1a-a225-587c9afaa99f/volumes" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.812762 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814538 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-content" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814559 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-content" Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814589 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-utilities" Feb 
14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814595 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="extract-utilities" Feb 14 06:03:37 crc kubenswrapper[4867]: E0214 06:03:37.814610 4867 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814617 4867 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.814922 4867 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b226f7b-fb10-4b1a-a225-587c9afaa99f" containerName="registry-server" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.818267 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.838174 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916059 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916446 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:37 crc kubenswrapper[4867]: I0214 06:03:37.916467 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.018889 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.018931 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019196 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " 
pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019382 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.019591 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.049435 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"community-operators-skdnt\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.149028 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:38 crc kubenswrapper[4867]: I0214 06:03:38.731875 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.661723 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" exitCode=0 Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.662267 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95"} Feb 14 06:03:39 crc kubenswrapper[4867]: I0214 06:03:39.662293 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"67d6cd647c70f60815e5468464561e04331f9bada9a699f6c2a9522d742b4aec"} Feb 14 06:03:40 crc kubenswrapper[4867]: I0214 06:03:40.674573 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"} Feb 14 06:03:42 crc kubenswrapper[4867]: I0214 06:03:42.698926 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" exitCode=0 Feb 14 06:03:42 crc kubenswrapper[4867]: I0214 06:03:42.699000 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"} Feb 14 06:03:43 crc kubenswrapper[4867]: I0214 06:03:43.719743 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerStarted","Data":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"} Feb 14 06:03:43 crc kubenswrapper[4867]: I0214 06:03:43.744619 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-skdnt" podStartSLOduration=3.3298954800000002 podStartE2EDuration="6.744600071s" podCreationTimestamp="2026-02-14 06:03:37 +0000 UTC" firstStartedPulling="2026-02-14 06:03:39.664761905 +0000 UTC m=+6851.745699219" lastFinishedPulling="2026-02-14 06:03:43.079466496 +0000 UTC m=+6855.160403810" observedRunningTime="2026-02-14 06:03:43.739812936 +0000 UTC m=+6855.820750250" watchObservedRunningTime="2026-02-14 06:03:43.744600071 +0000 UTC m=+6855.825537385" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.151331 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.157098 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.214187 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.865883 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:48 crc kubenswrapper[4867]: I0214 06:03:48.947603 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.805399 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-skdnt" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerName="registry-server" containerID="cri-o://6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" gracePeriod=2 Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.870822 4867 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.873452 4867 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.893715 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990365 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990633 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:50 crc kubenswrapper[4867]: I0214 06:03:50.990677 4867 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093278 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093411 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.093439 4867 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.094199 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.094333 4867 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.122669 4867 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"redhat-marketplace-nb97r\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.324314 4867 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.544716 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711018 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711215 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.711332 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") pod \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\" (UID: \"1f920796-3206-4c6a-ad78-e8a2b2c07c79\") " Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.713471 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities" (OuterVolumeSpecName: "utilities") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.718639 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl" (OuterVolumeSpecName: "kube-api-access-f9kcl") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "kube-api-access-f9kcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.760477 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f920796-3206-4c6a-ad78-e8a2b2c07c79" (UID: "1f920796-3206-4c6a-ad78-e8a2b2c07c79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814744 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9kcl\" (UniqueName: \"kubernetes.io/projected/1f920796-3206-4c6a-ad78-e8a2b2c07c79-kube-api-access-f9kcl\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814781 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.814794 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f920796-3206-4c6a-ad78-e8a2b2c07c79-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821202 4867 generic.go:334] "Generic (PLEG): container finished" podID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" exitCode=0 Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821246 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"} Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821279 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-skdnt" event={"ID":"1f920796-3206-4c6a-ad78-e8a2b2c07c79","Type":"ContainerDied","Data":"67d6cd647c70f60815e5468464561e04331f9bada9a699f6c2a9522d742b4aec"} Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821298 4867 scope.go:117] "RemoveContainer" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.821336 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-skdnt" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.843087 4867 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.844932 4867 scope.go:117] "RemoveContainer" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.864771 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.878131 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-skdnt"] Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.907821 4867 scope.go:117] "RemoveContainer" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.948573 4867 scope.go:117] "RemoveContainer" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.949997 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": container with ID starting with 6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9 not found: ID does not exist" containerID="6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950060 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9"} err="failed to get container status \"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": rpc error: code = NotFound desc = could not find container \"6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9\": container with ID starting with 6771b5c9272a56fa84c233588bcb4b4619981c67cfc8b4f6090cd4151faec5c9 not found: ID does not exist" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950098 4867 scope.go:117] "RemoveContainer" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.950392 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": container with ID starting with 270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101 not found: ID does not exist" containerID="270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950429 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101"} err="failed to get container status \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": rpc error: code = NotFound desc = could not find container \"270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101\": container with ID starting with 270740dc153ad9475050bc2543190184b7cbfe3e2e0f4304c8365db7d151b101 not found: ID does not exist" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950452 4867 scope.go:117] "RemoveContainer" 
containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: E0214 06:03:51.950629 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": container with ID starting with fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95 not found: ID does not exist" containerID="fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95" Feb 14 06:03:51 crc kubenswrapper[4867]: I0214 06:03:51.950645 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95"} err="failed to get container status \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": rpc error: code = NotFound desc = could not find container \"fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95\": container with ID starting with fc34ec4dacce9de548210d3888519f4ed35c73971d99af5acd89710e680fde95 not found: ID does not exist" Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841394 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" exitCode=0 Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841458 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"} Feb 14 06:03:52 crc kubenswrapper[4867]: I0214 06:03:52.841527 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"d6d2b1021653408d35b901acf6093bc305e2a51d6c61afc86f9c705075f31a80"} Feb 14 06:03:53 crc kubenswrapper[4867]: I0214 06:03:53.019238 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f920796-3206-4c6a-ad78-e8a2b2c07c79" path="/var/lib/kubelet/pods/1f920796-3206-4c6a-ad78-e8a2b2c07c79/volumes" Feb 14 06:03:53 crc kubenswrapper[4867]: I0214 06:03:53.857599 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} Feb 14 06:03:54 crc kubenswrapper[4867]: I0214 06:03:54.875841 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" exitCode=0 Feb 14 06:03:54 crc kubenswrapper[4867]: I0214 06:03:54.876088 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} Feb 14 06:03:55 crc kubenswrapper[4867]: I0214 06:03:55.893488 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerStarted","Data":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} Feb 14 06:03:55 crc kubenswrapper[4867]: I0214 
06:03:55.938089 4867 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb97r" podStartSLOduration=3.50076664 podStartE2EDuration="5.938067281s" podCreationTimestamp="2026-02-14 06:03:50 +0000 UTC" firstStartedPulling="2026-02-14 06:03:52.845810642 +0000 UTC m=+6864.926747956" lastFinishedPulling="2026-02-14 06:03:55.283111283 +0000 UTC m=+6867.364048597" observedRunningTime="2026-02-14 06:03:55.931595061 +0000 UTC m=+6868.012532375" watchObservedRunningTime="2026-02-14 06:03:55.938067281 +0000 UTC m=+6868.019004595" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.325058 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.325946 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:01 crc kubenswrapper[4867]: I0214 06:04:01.396759 4867 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:02 crc kubenswrapper[4867]: I0214 06:04:02.048985 4867 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:02 crc kubenswrapper[4867]: I0214 06:04:02.117415 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:03 crc kubenswrapper[4867]: I0214 06:04:03.999064 4867 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb97r" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerName="registry-server" containerID="cri-o://c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" gracePeriod=2 Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.582761 4867 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.675621 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.675706 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.676027 4867 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") pod \"9d8ca39c-0068-495f-97b4-5da29e98c60d\" (UID: \"9d8ca39c-0068-495f-97b4-5da29e98c60d\") " Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.676572 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities" (OuterVolumeSpecName: "utilities") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.677258 4867 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-utilities\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.688213 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn" (OuterVolumeSpecName: "kube-api-access-jxfwn") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "kube-api-access-jxfwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.714314 4867 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d8ca39c-0068-495f-97b4-5da29e98c60d" (UID: "9d8ca39c-0068-495f-97b4-5da29e98c60d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.780156 4867 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d8ca39c-0068-495f-97b4-5da29e98c60d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:04 crc kubenswrapper[4867]: I0214 06:04:04.780192 4867 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxfwn\" (UniqueName: \"kubernetes.io/projected/9d8ca39c-0068-495f-97b4-5da29e98c60d-kube-api-access-jxfwn\") on node \"crc\" DevicePath \"\"" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.016324 4867 generic.go:334] "Generic (PLEG): container finished" podID="9d8ca39c-0068-495f-97b4-5da29e98c60d" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" exitCode=0 Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.016669 4867 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb97r" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018389 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018465 4867 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb97r" event={"ID":"9d8ca39c-0068-495f-97b4-5da29e98c60d","Type":"ContainerDied","Data":"d6d2b1021653408d35b901acf6093bc305e2a51d6c61afc86f9c705075f31a80"} Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.018531 4867 scope.go:117] "RemoveContainer" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.065625 4867 scope.go:117] "RemoveContainer" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.096208 4867 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.103631 4867 scope.go:117] "RemoveContainer" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.112781 4867 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb97r"] Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.176410 4867 scope.go:117] "RemoveContainer" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177005 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": container with ID starting with c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73 not found: ID does not exist" containerID="c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177039 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73"} err="failed to get container status \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": rpc error: code = NotFound desc = could not find container \"c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73\": container with ID starting with c238cf32431b5fdbf0d046a658827e69d531fa80e0765bdb709fdb0e7d84ff73 not found: ID does not exist" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177060 4867 scope.go:117] "RemoveContainer" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177391 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": container with ID starting with 0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d not found: ID does not exist" containerID="0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177426 4867 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d"} err="failed to get container status \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": rpc error: code = NotFound desc = could not find container \"0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d\": container with ID starting with 0672b088860a536354dcbeae22dec1fcbace310acd6427ef724105d24287fc4d not found: ID does not exist" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177445 4867 scope.go:117] "RemoveContainer" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" Feb 14 06:04:05 crc kubenswrapper[4867]: E0214 06:04:05.177906 4867 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": container with ID starting with 413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430 not found: ID does not exist" containerID="413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430" Feb 14 06:04:05 crc kubenswrapper[4867]: I0214 06:04:05.177939 4867 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430"} err="failed to get container status \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": rpc error: code = NotFound desc = could not find container \"413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430\": container with ID starting with 413f4dc3289647869b4e0a78505be1e59030680c66702a8177e82a0cff56b430 not found: ID does not exist" Feb 14 06:04:07 crc kubenswrapper[4867]: I0214 06:04:07.022310 4867 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8ca39c-0068-495f-97b4-5da29e98c60d" path="/var/lib/kubelet/pods/9d8ca39c-0068-495f-97b4-5da29e98c60d/volumes" Feb 14 06:04:31 crc kubenswrapper[4867]: I0214 06:04:31.251232 4867 patch_prober.go:28] interesting pod/machine-config-daemon-4s95t container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 14 06:04:31 crc kubenswrapper[4867]: I0214 06:04:31.252055 4867 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4s95t" podUID="5992e46c-bce7-4b9f-82f2-c7ffb93286cd" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"